Search Results

Search found 10106 results on 405 pages for 'fail fast'.


  • Stop Visual Studio from appending numbers to the end of new controls

    - by techturtle
    I am wondering if there is any way to stop Visual Studio 2010 from appending a number to the end of the ID on new controls I create. For example, when I add a new TextBox, I would prefer that it do this:
        <asp:TextBox ID="TextBox" runat="server">
        <asp:TextBox ID="TextBox" runat="server">
        <asp:TextBox ID="TextBox" runat="server">
    Instead of this:
        <asp:TextBox ID="TextBox1" runat="server">
        <asp:TextBox ID="TextBox2" runat="server">
        <asp:TextBox ID="TextBox3" runat="server">
    It would make it easier to rename them appropriately, so I don't have to arrow/mouse over and delete the number each time. As I was writing this, the "Questions that may already have your answer" box suggested this: How do I prevent Visual Studio from renaming my controls? which admittedly was the biggest part of my annoyance, but that appears to turn off putting in an ID="" field altogether, not just for pasted controls. It would still be helpful to turn off the numbering for new, non-pasted controls and have it not rename pasted ones as well. At the moment I'm working with ASP.NET, but it would be nice if there was a way to do it for WinForms as well. Before anyone suggests it, I do know that allowing it to append the numbers prevents name conflicts should I not rename them appropriately. However, I would much rather have it fail to compile so I know to fix the issue now (if I forget to name something properly) than find random "TextBox1" items lying around in the code later on.

    Read the article

  • How to remove a "green screen" portrait background

    - by danbystrom
    I'm looking for a way to automatically remove (=make transparent) a "green screen" portrait background from a lot of pictures. My own attempts this far have been... ehum... less successful. I'm looking around for any hints or solutions or papers on the subject. Commercial solutions are just fine, too. And before you comment and say that it is impossible to do this automatically: no it isn't. There actually exists a company which offers exactly this service, and if I fail to come up with a different solution we're going to use them. The problem is that they guard their algorithm with their lives, and therefore won't sell/license their software. Instead we have to FTP all pictures to them where the processing is done and then we FTP the result back home. (And no, they don't have an underpaid staff hidden away in the Philippines which handles this manually, since we're talking several thousand pictures a day...) However, this approach limits its usefulness for several reasons. So I'd really like a solution where this could be done instantly while being offline from the internet.
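    For anyone exploring the do-it-yourself route, a very naive per-pixel sketch in C#/System.Drawing (an assumed stack, not taken from the question) shows the basic idea of keying on green dominance. Real production keying also needs edge feathering, spill suppression and noise handling, which is exactly the hard part the commercial service has solved; the thresholds below are guesses to tune per lighting setup.
        using System.Drawing;
        using System.Drawing.Imaging;

        // Naive chroma key: any pixel whose green channel clearly dominates red and blue
        // becomes fully transparent.
        static Bitmap KeyOutGreen(string inputPath)
        {
            using (var src = new Bitmap(inputPath))
            {
                var dst = src.Clone(new Rectangle(0, 0, src.Width, src.Height), PixelFormat.Format32bppArgb);
                for (int y = 0; y < dst.Height; y++)
                {
                    for (int x = 0; x < dst.Width; x++)
                    {
                        Color c = dst.GetPixel(x, y);
                        if (c.G > c.R + 40 && c.G > c.B + 40)
                            dst.SetPixel(x, y, Color.FromArgb(0, c.R, c.G, c.B));
                    }
                }
                return dst; // save with dst.Save(path, ImageFormat.Png) to keep the alpha channel
            }
        }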

    Read the article

  • Can somebody explain this remark in the MSDN CreateMutex() documentation about the bInitialOwner flag

    - by Tom Williams
    The MSDN CreateMutex() documentation (http://msdn.microsoft.com/en-us/library/ms682411%28VS.85%29.aspx) contains the following remark near the end:
        Two or more processes can call CreateMutex to create the same named mutex. The first process actually creates the mutex, and subsequent processes with sufficient access rights simply open a handle to the existing mutex. This enables multiple processes to get handles of the same mutex, while relieving the user of the responsibility of ensuring that the creating process is started first. When using this technique, you should set the bInitialOwner flag to FALSE; otherwise, it can be difficult to be certain which process has initial ownership.
    Can somebody explain the problem with using bInitialOwner = TRUE? Earlier in the same documentation it suggests a call to GetLastError() will allow you to determine whether a call to CreateMutex() created the mutex or just returned a new handle to an existing mutex:
        Return Value: If the function succeeds, the return value is a handle to the newly created mutex object. If the function fails, the return value is NULL. To get extended error information, call GetLastError. If the mutex is a named mutex and the object existed before this function call, the return value is a handle to the existing object, GetLastError returns ERROR_ALREADY_EXISTS, bInitialOwner is ignored, and the calling thread is not granted ownership. However, if the caller has limited access rights, the function will fail with ERROR_ACCESS_DENIED and the caller should use the OpenMutex function.
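    For reference, a minimal C sketch of the pattern the remark recommends: create the mutex unowned, then take ownership explicitly with a wait function, so it never matters which process happened to create it (the mutex name is a placeholder).
        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            /* Create (or open) the named mutex without claiming ownership. */
            HANDLE h = CreateMutexW(NULL, FALSE, L"Global\\MyAppMutex");
            if (h == NULL) {
                printf("CreateMutex failed: %lu\n", GetLastError());
                return 1;
            }
            if (GetLastError() == ERROR_ALREADY_EXISTS) {
                /* Another process created it first; we just opened a handle. */
            }

            /* Ownership is acquired explicitly here, so there is no ambiguity. */
            WaitForSingleObject(h, INFINITE);
            /* ... protected work ... */
            ReleaseMutex(h);
            CloseHandle(h);
            return 0;
        }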

    Read the article

  • Tracking down slow managed DLL loading

    - by Alex K
    I am faced with the following issue and at this point I feel like I'm severely lacking some sort of tool, I just don't know what that tool is, or what exactly it should be doing. Here is the setup: I have a 3rd party DLL that has to be registered in the GAC. This all works fine and good on pretty much every machine our software was deployed on before. But now we got 2 machines, seemingly identical to the ones we know work (they are cloned from the same image and stuffed with the same hardware, so pretty much the only difference is software settings, which I went over and over, and they seem fine). Now the problem: the DLL in the GAC takes a very long time to load. At least I believe this is the issue; what I can say definitively is that instantiating a single class from that DLL is the slow part. Once it is loaded, things fly as they always have. But while on known-good machines the DLL loads so fast that a timestamp in the log doesn't even change, on these 2 machines it takes over 1 minute to load. Knowns: I have no access to the source, so I can't debug through the DLL. Our app is the only one that uses it (so there shouldn't be simultaneous access issues). There is only one version of this DLL in existence, so it shouldn't be a matter of version conflict. The GAC reference is being used (if I uninstall the DLL from the GAC, an exception will be thrown about the missing GAC reference). Could someone with greater skill in debug-fu suggest what I can do to track down the root cause of this issue?
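    One frequently reported cause of a one-off delay of roughly a minute when loading a signed assembly is the CLR trying to verify the publisher certificate against an online revocation list on a machine without internet access. That is only a guess for this case, but it is cheap to test; a sketch of the app.config switch that skips that check on .NET 2.0/3.5:
        <configuration>
          <runtime>
            <!-- Skip Authenticode publisher-evidence generation at assembly load time -->
            <generatePublisherEvidence enabled="false"/>
          </runtime>
        </configuration>
    If that changes nothing, the Fusion assembly-binding log is the usual next step for seeing where the load time actually goes.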

    Read the article

  • iPhone: Low memory crash...

    - by MacTouch
    Once again I'm hunting memory leaks and other crazy mistakes in my code. :) I have a cache of frequently used files (images, data records, etc.) with a TTL of about one week and a size limit (100 MB). There are sometimes more than 15000 files in a directory. On application exit the cache writes a control file with the current cache size along with other useful information. If the application crashes for some reason (sh.. happens sometimes), I then have to calculate the size of all files on application start to make sure I know the cache size. My app crashes at this point because of low memory and I have no clue why. The memory leak detector does not show any leaks at all, and I don't see any either. What's wrong with the code below? Is there any other fast way to calculate the total size of all files within a directory on iPhone? Maybe without having to enumerate the whole contents of the directory? The code is executed on the main thread.
        NSUInteger result = 0;
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        NSDirectoryEnumerator *dirEnum = [[[NSFileManager defaultManager] enumeratorAtPath:path] retain];
        int i = 0;
        while ([dirEnum nextObject]) {
            NSDictionary *attributes = [dirEnum fileAttributes];
            NSNumber *fileSize = [attributes objectForKey:NSFileSize];
            result += [fileSize unsignedIntValue];
            if (++i % 500 == 0) { // I tried lower values too
                [pool drain];
            }
        }
        [dirEnum release];
        dirEnum = nil;
        [pool release];
        pool = nil;
    Thanks, MacTouch
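    One detail worth double-checking in the posted loop (an observation, not a confirmed diagnosis): on iOS, -drain behaves like -release, so after the first drain the pool is gone, every autoreleased object from the enumerator piles up, and the final [pool release] over-releases. The usual pattern is to recreate the pool each time it is emptied, roughly:
        if (++i % 500 == 0) {
            [pool release];
            pool = [[NSAutoreleasePool alloc] init];
        }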

    Read the article

  • web service slowdown

    - by user238591
    Hi, I have a web service slowdown. My (web) service is in gSOAP & managed C++. It's not IIS/Apache hosted, but it speaks XML. My client is in .NET. The service computation time is light (< 0.1 s to prepare a reply). I expect the service to be smooth, fast and have good availability. I have about 100 clients; a 1 s response time is mandatory. Clients make about 1 request per minute. Clients check for web service presence with a TCP open-port test. So, to avoid possible congestion, I turned gSOAP KeepAlive to false. Up to there everything runs fine: I barely see connections in TCPView (Sysinternals). A new special synchronisation program now calls the service in a loop. It's a higher load, but everything is processed in less than 30 seconds. With Sysinternals TCPView, I see that about a thousand connections are in TIME_WAIT. They slow down the service, and it now takes seconds for the service to reply. Could it be that I need to reset the SoapHttpClientProtocol connection? Has anyone else seen TIME_WAIT ghosts with a web service called in a loop?

    Read the article

  • Smallcaps / multiple fonts and bolding using 'DrawString' in GDI+

    - by Simon_Weaver
    I want to write out some text using smallcaps in combination with different fonts for different words. To clarify, I might want the message 'Welcome to our New Website', which is generated into a PNG file for the header of a page. The text will be smallcaps - everything is capitalized but the 'W', 'N' and 'W' are slightly larger. The 'New Website' will be in a different font than the rest of the text. Is there a way I can do this without doing it completely manually? Doing something like this is conceptually what I want to do:
        graphics.DrawString("<font size=2>W</font>ELCOME TO OUR <b><font size=2>N</font>" +
                            "EW <font size=2>W</font>EBSITE</b>");
    The best approach I could find so far is here, but I'm worried that I'll go to all the trouble to do this manually and end up with some horrible kerning or tracking problems. Edit: I should have mentioned that this is being done within ASP.NET, so it needs to be fast and as lean as possible. I want it to be automated so I can localize easily and not have to create tonnes of little images.
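    A manual, run-by-run sketch (fonts, sizes and the target bitmap bmp are placeholders) shows what the measured-advance approach looks like; kerning across run boundaries is lost, which is exactly the trade-off the question worries about, and MeasureString adds padding, so spacing needs tuning (or StringFormat.GenericTypographic).
        using (var g = Graphics.FromImage(bmp))
        using (var big = new Font("Georgia", 14))
        using (var small = new Font("Georgia", 11))
        {
            float x = 10f, y = 10f;
            // Draw one styled run and advance the pen by its measured width.
            Action<string, Font> drawRun = (text, font) =>
            {
                g.DrawString(text, font, Brushes.Black, x, y);
                x += g.MeasureString(text, font).Width;
            };
            drawRun("W", big);  drawRun("ELCOME TO OUR ", small);
            drawRun("N", big);  drawRun("EW ", small);
            drawRun("W", big);  drawRun("EBSITE", small);
        }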

    Read the article

  • Dice Emulation - ImageView

    - by Michelle Harris
    I am trying to emulate dice using ImageView. When I click the button, nothing seems to happen. I have hard-coded this example to replace the image with imageView4 for debugging purposes (I was making sure the random wasn't failing). Can anyone point out what I am doing wrong? I am new to Java, Eclipse and Android, so I'm sure I've probably made more than one mistake. Java:
        import java.util.Random;
        import android.app.Activity;
        import android.os.Bundle;
        import android.view.View;
        import android.widget.ArrayAdapter;
        import android.widget.ImageView;
        import android.widget.Spinner;
        import android.widget.Toast;

        public class Yahtzee4Activity extends Activity {
            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                Spinner s = (Spinner) findViewById(R.id.spinner);
                ArrayAdapter adapter = ArrayAdapter.createFromResource(
                        this, R.array.score_types, android.R.layout.simple_spinner_dropdown_item);
                adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
                s.setAdapter(adapter);
            }

            public void onMyButtonClick(View view) {
                ImageView imageView1 = new ImageView(this);
                Random rand = new Random();
                int rndInt = 4; // rand.nextInt(6) + 1; // n = the number of images, which start at index 1
                String imgName = "die" + rndInt;
                int id = getResources().getIdentifier(imgName, "drawable", getPackageName());
                imageView1.setImageResource(id);
            }
        }
    XML for the button:
        <Button android:id="@+id/button_roll"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@string/roll"
            android:onClick="onMyButtonClick" />

    Read the article

  • Efficient database access when dealing with multiple abstracted repositories

    - by Nathan Ridley
    I want to know how most people are dealing with the repository pattern when it involves hitting the same database multiple times (sometimes transactionally) and trying to do so efficiently while maintaining database agnosticism and using multiple repositories together. Let's say we have repositories for three different entities: Widget, Thing and Whatsit. Each repository is abstracted via a base interface as per normal decoupling design processes. The base interfaces would then be IWidgetRepository, IThingRepository and IWhatsitRepository. Now we have our business layer or equivalent (whatever you want to call it). In this layer we have classes that access the various repositories. Often the methods in these classes need to do batch/combined operations where multiple repositories are involved. Sometimes one method may make use of another method internally, while that method can still be called independently. What about, in this scenario, when the operation needs to be transactional? Example:
        class Bob {
            private IWidgetRepository _widgetRepo;
            private IThingRepository _thingRepo;
            private IWhatsitRepository _whatsitRepo;

            public Bob(IWidgetRepository widgetRepo, IThingRepository thingRepo, IWhatsitRepository whatsitRepo) {
                _widgetRepo = widgetRepo;
                _thingRepo = thingRepo;
                _whatsitRepo = whatsitRepo;
            }

            public void DoStuff() {
                _widgetRepo.StoreSomeStuff();
                _thingRepo.ReadSomeStuff();
                _whatsitRepo.SaveSomething();
            }

            public void DoOtherThing() {
                _widgetRepo.UpdateSomething();
                DoStuff();
            }
        }
    How do I keep my access to that database efficient and not have a constant stream of open-close-open-close on connections and inadvertent invocation of MSDTC and whatnot? If my database is something like SQLite, standard mechanisms like creating nested transactions are going to inherently fail, yet the business layer should not have to concern itself with such things. How do you handle such issues? Does ADO.NET provide simple mechanisms to handle this, or do most people end up wrapping their own custom bits of code around ADO.NET to solve these types of problems?
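    One common answer, sketched here with hypothetical names rather than a prescribed pattern, is to push connection and transaction lifetime into a small unit-of-work object that all repositories share, so a business-layer method opens one connection, runs everything on it, and commits once:
        using System;
        using System.Data;

        public interface IUnitOfWork : IDisposable
        {
            IDbConnection Connection { get; }   // one open connection for the whole operation
            IDbTransaction Transaction { get; }
            void Commit();
        }

        // Bob's methods then pass the unit of work down instead of letting each
        // repository open and close its own connection.
        public void DoStuff()
        {
            using (var uow = _unitOfWorkFactory.Create())  // _unitOfWorkFactory is assumed/injected
            {
                _widgetRepo.StoreSomeStuff(uow);
                _thingRepo.ReadSomeStuff(uow);
                _whatsitRepo.SaveSomething(uow);
                uow.Commit();                              // one local transaction, no MSDTC escalation
            }
        }
    For SQLite this maps naturally onto a single connection with a single transaction, and a nested business call can reuse the ambient unit of work instead of starting another.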

    Read the article

  • How can I get swig to wrap a linked list-type structure?

    - by bk
    Here's what I take to be a pretty standard header for a list. Because the struct points to itself, we need this two-part declaration. Call it listicle.h:
        typedef struct _listicle listicle;
        struct _listicle {
            int i;
            listicle *next;
        };
    I'm trying to get swig to wrap this, so that the Python user can make use of the listicle struct. Here's what I have in listicle.i right now:
        %module listicle
        %{
        #include "listicle.h"
        %}

        %include listicle.h
        %rename(listicle) _listicle;

        %extend listicle {
            listicle() { return malloc(sizeof(listicle)); }
        }
    As you can tell by my being here asking, it doesn't work. All the various combinations I've tried each fail in their own special way. [This one: %extend defined for an undeclared class listicle. Change it to %extend _listicle (and fix the constructor) and loading in Python gives type object '_listicle' has no attribute '_listicle_swigregister'. And so on.] Suggestions?

    Read the article

  • How to optimize a postgreSQL server for a "write once, read many"-type infrastructure ?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when the entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, thus the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content and insertion date; and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids across the two tables. To sum it up:
        - 2 tables
        - lots of INSERT
        - no UPDATE
        - some DELETE, once a day at most
        - some user-generated SELECT with JOIN
        - huge data set
    What would an optimal server configuration (software and hardware; I assume for example that RAID10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that keeps SELECT query times reasonably low. I can provide more information about the current setup (like tables, indexes ...) if needed.
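    Configuration aside, most of the SELECT time in a schema of this shape usually comes down to whether the indexes match the JOIN and the tag filters; a sketch with hypothetical table and column names:
        -- Hypothetical names: entries(id, content, inserted_at), tags(entry_id, name, value)
        CREATE INDEX idx_tags_name_value ON tags (name, value);     -- drives the tag-value searches
        CREATE INDEX idx_tags_entry      ON tags (entry_id);        -- makes the JOIN back to entries cheap
        CREATE INDEX idx_entries_date    ON entries (inserted_at);  -- keeps the daily purge from scanning
    On the server side, the usual knobs for a read-mostly box are shared_buffers (a sizeable slice of RAM), effective_cache_size set near total RAM, and enough work_mem for the join/sort steps; RAID10 on the data volume helps mainly with the sustained INSERT stream and the once-a-day DELETE.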

    Read the article

  • Load In and Animate content

    - by crozer
    Hello, I have a little issue concerning an animation effect which loads a certain div into the body of the site. Let me be more precise: I have a div with the id 'contact':
        <div id="contact">content</div>
    The jQuery code loads the contents within that div when I press the link with the id 'ajax_contact':
        <a href="#" id="ajax_contact">link</a>
    The code is working perfectly. However, I want #contact to be HIDDEN when the site loads, i.e. the default state must be non-visible. Only when the user clicks the link #ajax_contact must the div appear. Please have a look at the jQuery code:
        $(document).ready(function() {
            var hash = window.location.hash.substr(1);
            var href = $('#ajax_contact').each(function(){
                var href = $(this).attr('href');
                if (hash == href.substr(0, href.length - 5)) {
                    var toLoad = hash + '.html #contact';
                    $('#contact').load(toLoad)
                }
            });
            $('#ajax_contact').click(function(){
                var toLoad = $(this).attr('href') + ' #contact';
                $('#contact').hide('fast', loadContent);
                $('#load').remove();
                $('body').append('<span id="load">LOADING...</span>');
                $('#load').fadeIn('normal');
                window.location.hash = $(this).attr('href').substr(0, $(this).attr('href').length - 5);
                function loadContent() {
                    $('#contact').load(toLoad, '', showNewContent())
                }
                function showNewContent() {
                    $('#contact').show('normal', hideLoader());
                }
                function hideLoader() {
                    $('#load').fadeOut('normal');
                }
                return false;
            });
        });
    I am not sure whether I must change something inside the HTML, but I believe the key is inside the jQuery code. I also tried giving #contact a CSS style of visible:none, yet this loops and makes it impossible for the jQuery to load #contact in. I hope I've explained myself well; thank you very much in advance. Chris

    Read the article

  • Java: fastest way to do random reads on huge disk file(s)

    - by cocotwo
    I've got a moderately big set of data, about 800 MB or so, that is basically some big precomputed table that I need in order to speed up some computation by several orders of magnitude (creating that file took several days on several multicore computers, using an optimized and multi-threaded algo... I do really need that file). Now that it has been computed once, that 800 MB of data is read only. I cannot hold it in memory. As of now it is one big huge 800 MB file, but splitting it into smaller files ain't a problem if it can help. I need to read about 32 bits of data here and there in that file a lot of times. I don't know beforehand where I'll need to read these data: the reads are uniformly distributed. What would be the fastest way in Java to do my random reads in such a file or files? Ideally I should be doing these reads from several unrelated threads (but I could queue the reads in a single thread if needed). Is Java NIO the way to go? I'm not familiar with 'memory mapped file': I think I don't want to map the 800 MB in memory. All I want is the fastest random reads I can get to access these 800 MB of disk-based data. btw in case people wonder, this is not at all the same as the question I asked not long ago: http://stackoverflow.com/questions/2346722/java-fast-disk-based-hash-set
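    As a baseline before memory mapping, plain positional reads on a shared FileChannel are simple and already thread-friendly, since read(buffer, position) never touches the channel's own file position; a sketch (path handling and error policy simplified):
        import java.io.EOFException;
        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;

        public class PrecomputedTable {
            private final FileChannel channel;

            public PrecomputedTable(String path) throws IOException {
                channel = new RandomAccessFile(path, "r").getChannel();
            }

            // Absolute (positional) read of one 4-byte value; safe to call from several
            // threads on the same channel because it never moves the file position.
            public int readIntAt(long offset) throws IOException {
                ByteBuffer buf = ByteBuffer.allocate(4);
                while (buf.hasRemaining()) {
                    if (channel.read(buf, offset + buf.position()) < 0) {
                        throw new EOFException("offset " + offset + " past end of table");
                    }
                }
                buf.flip();
                return buf.getInt();
            }
        }
    If this is still disk-bound, note that MappedByteBuffer views over, say, eight ~100 MB chunks only map the file into virtual address space rather than loading 800 MB onto the heap, so mapping remains worth benchmarking.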

    Read the article

  • Is there a way to detect Layout or Display changes in WPF?

    Hello! I am trying to check how fast the Frame control can display a FixedPage object when it is assigned to the Frame.Content property. I plan to check the tick count before and after the assignment to the Content property. Example:
        int starttime = Environment.TickCount;
        frame1.Content = fixedpage;
        int endtime = Environment.TickCount;
    The problem is that the assignment to the Content property might be asynchronous and return immediately, therefore I get a zero amount of time. The rendering of the FixedPage, however, visually has a lag time from the assignment of the Content property up to the point where the FixedPage appears on screen. The Frame.ContentChanged() event is no good either, because it gets triggered even before the FixedPage appears on screen, so it's not accurate. I'm thinking of detecting the change on the window or control's display instead, in order to get the time when the FixedPage is actually displayed on screen. Is there a way to do this in WPF? Thanks!
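    One workable approximation (a sketch, not an exact render timestamp) is to queue a callback at a low dispatcher priority right after assigning Content, so it runs once the layout and render work that the assignment queued has been processed:
        int start = Environment.TickCount;
        frame1.Content = fixedpage;

        // Runs after the dispatcher has drained the higher-priority layout/render work
        // queued by the Content assignment; close to, but not exactly, "pixels on screen".
        frame1.Dispatcher.BeginInvoke(
            System.Windows.Threading.DispatcherPriority.Loaded,
            new Action(() =>
            {
                int elapsed = Environment.TickCount - start;
                System.Diagnostics.Debug.WriteLine("FixedPage shown after ~" + elapsed + " ms");
            }));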

    Read the article

  • Why can't I get properties from members of this collection?

    - by Lunatik
    I've added some form controls to a collection and can retrieve their properties when I refer to the members by index. However, when I try to use any properties by referencing members of the collection I see a 'Could not set the ControlSource property. Member not found.' error in the Locals window. Here is a simplified version of the code:
        'Add controls to collection
        For x = 0 To UBound(tabs)
            activeTabs.Add Item:=Form.MultiPage.Pages(Val(tabs(x, 1))), _
                key:=Form.MultiPage.Pages(Val(tabs(x, 1))).Caption
        Next x

        'Check name using collection index
        For x = 0 To UBound(tabs)
            Debug.Print "Tab name from index: " & activeTabs(x + 1).Caption
        Next x

        'Check name using collection members
        For Each formTab In activeTabs
            Debug.Print "Tab name from collection: " & formTab.Caption
        Next formTab
    The results in the Immediate window are:
        Tab name from index: Caption1
        Tab name from index: Caption2
        Tab name from collection:
        Tab name from collection:
    Why does one method work and the other fail? This is in a standard code module, but I have similar code working just fine from within form modules. Could this have anything to do with it?

    Read the article

  • Improve Efficiency in Array comparison in Ruby

    - by user2985025
    Hi, I am working with Ruby/Cucumber and have a requirement to develop a comparison module/program to compare two files. Below are the requirements:
        - The project is a migration project. Data from one application is moved to another.
        - Need to compare the data from the existing application against the new one.
    Solution: I have developed a comparison engine in Ruby for the above requirement.
        a) Get the data, de-duplicated and sorted, from both DBs
        b) Put the data in a text file with "||" as the delimiter
        c) Use the key columns (numbers) that provide a unique record in the db to compare the two files
    For example, file1 has 1,2,3,4,5,6 and file2 has 1,2,3,4,5,7 and the columns 1,2,3,4,5 are key columns. I use these key columns and compare 6 and 7, which results in a fail. Issue: The major issue we are facing is that if the mismatches are more than 70% for 100,000 records or more, the comparison time is large. If the mismatches are less than 40%, the comparison time is OK. Diff and Diff-LCS will not work in this case because we need key columns to arrive at an accurate data comparison between the two applications. Is there any other method to efficiently reduce the time if the mismatches are more than 70% for 100,000 records or more? Thanks
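    If the current engine scans one file for each line of the other, most of the high-mismatch cost is the repeated searching; indexing one file by its key columns first makes each lookup O(1) regardless of mismatch rate. A rough sketch (file names, delimiter and key-column count assumed from the description):
        # Build a hash keyed on the key columns so each record is matched in constant time.
        def index_file(path, key_count)
          index = {}
          File.foreach(path) do |line|
            cols = line.chomp.split("||")
            index[cols.take(key_count)] = cols.drop(key_count)
          end
          index
        end

        old_index = index_file("file1.txt", 5)
        new_index = index_file("file2.txt", 5)

        mismatches = old_index.reject { |key, rest| new_index[key] == rest }
        puts "#{mismatches.size} mismatched records"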

    Read the article

  • what is the best way to optimize my json on an asp.net-mvc site

    - by ooo
    I am currently using jqGrid on an ASP.NET MVC site and we have a pretty slow network (internal application), and it seems to be taking the grid a long time to load (the issue is both network as well as parsing and rendering). I am trying to determine how to minimize what I send over to the client to make it as fast as possible. Here is a simplified view of my controller action to load data into the grid:
        [AcceptVerbs(HttpVerbs.Get)]
        public ActionResult GridData1(GridData args)
        {
            var paginatedData = applications.GridPaginate(args.page ?? 1, args.rows ?? 10, i => new
            {
                i.Id,
                Name = "<div class='showDescription' id= '" + i.id + "'>" + i.Name + "</div>",
                MyValue = GetImageUrl(_map, i.value, "star"),
                ExternalId = string.Format("<a href=\"{0}\" target=\"_blank\">{1}</a>", Url.Action("Link", "Order", new { id = i.id }), i.Id),
                i.Target,
                i.Owner,
                EndDate = i.EndDate,
                Updated = "<div class='showView' aitId= '" + i.AitId + "'>" + GetImage(i.EndDateColumn, "star") + "</div>",
            });
            return Json(paginatedData);
        }
    So I am building up JSON data (I have about 200 records of the above) and sending it back to the GUI to put in the jqGrid. The one thing I can think of is repeated data. In some of the JSON fields I am appending HTML on top of the raw "data". This is the same HTML on every record. It seems like it would be more efficient if I could just send the data and "append" the HTML around it on the client side. Is this possible? Then I would just be sending the actual data over the wire and have the client side put together the rest of the HTML tags (the divs, etc.). Also, if there are any other suggestions on how I can minimize the size of my messages, that would be great. I guess at some point these solutions will increase the client-side load, but it may be worth it to cut down on network traffic.
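    One way to cut the payload (sketched below with column and field names taken from the sample; exact row access depends on how the jsonReader is configured) is to return only the raw values from the action and rebuild the wrapper markup in a jqGrid custom formatter, so the repeated div/anchor strings never cross the wire:
        // colModel entry for the Name column; the server now sends plain text and the
        // browser re-wraps it, instead of shipping the same div markup 200 times.
        {
            name: 'Name',
            formatter: function (cellvalue, options, rowObject) {
                return "<div class='showDescription' id='" + rowObject.Id + "'>" + cellvalue + "</div>";
            }
        }
    Enabling the HTTP compression that ASP.NET/IIS already offers also tends to shrink this kind of highly repetitive JSON considerably.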

    Read the article

  • Ensuring quality of your software and code

    - by Filip Ekberg
    When I write code I usually follow some guidelines to ensure that my code meets a certain standard, and I, like any other developer, try to ensure that my code and software are of quality. Try to focus on the programming and not the understanding of the domain or any other pre-programming steps. These are the steps I live by:
        - Writing unit tests
        - Make it fail (no code)
        - Make it work (working code)
        - Analysing abstraction
        - Extracting methods
        - Extract interfaces
        - Refactoring
    In addition to the above, which is a part of refactoring, I also try to refactor the code with good tools such as ReSharper, CodeRush or others. The question: what is the next step? Commenting the code is trivial and shouldn't even have to be mentioned, but up-to-date comments and XML comments where needed are something that I try to have. All the above helps ensure that other developers can understand my code and that the code has some sort of quality and follows naming standards. It does not, however, ensure any product quality. I am looking for tools for post-development quality assurance, such as profilers, and for how one would use these tools to increase product quality.

    Read the article

  • How to keep windows from paging block of memory

    - by photo_tom
    We are working on a Vista/Windows 7 application that will be running in 64-bit mode using VS2008/C++. We will need to cache hundreds of 2-3 MB blobs of data in RAM for performance reasons, up to some memory limit. Our usage profile is such that we cannot read the data in fast enough if it is all on the disk. Cached memory usage will be larger than 1 GB. For this to work well, we need to ensure that Windows does not page this memory out, as that would defeat the purpose of why we are doing this. I've done a fair amount of research and cannot find documentation that states exactly how to do this. I've seen several references that imply memory-mapped files work this way. Is there an expert who can clarify this for me? I'm aware there are other programs that we could adapt to do this, for example splitting the blobs and loading them into memcache or in-memory databases, but they all have too many problems with performance or code complexity. Suggestions?
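    For what it's worth, memory-mapped files are not pinned by default; their pages age out like any others. The documented way to ask Windows to keep a region resident is VirtualLock, after raising the process working-set quota that bounds how much can be locked. A sketch (sizes are placeholders, and locking on this scale needs the working-set quotas to cover everything you lock):
        #include <windows.h>
        #include <iostream>

        // Raise the working-set ceiling, then pin one cached blob in physical RAM.
        bool PinBlob(void* blob, SIZE_T blobSize)
        {
            SIZE_T minWorkingSet = 1536ull * 1024 * 1024;  // placeholder: must cover all locked pages
            SIZE_T maxWorkingSet = 2048ull * 1024 * 1024;  // placeholder
            if (!SetProcessWorkingSetSize(GetCurrentProcess(), minWorkingSet, maxWorkingSet)) {
                std::cerr << "SetProcessWorkingSetSize failed: " << GetLastError() << '\n';
                return false;
            }
            if (!VirtualLock(blob, blobSize)) {            // pages stay resident while the process runs
                std::cerr << "VirtualLock failed: " << GetLastError() << '\n';
                return false;
            }
            return true;
        }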

    Read the article

  • java - question about thread abortion and deadlock - volatile keyword

    - by Tiyoal
    Hello all, I am having some trouble understanding how to stop a running thread. I'll try to explain it by example. Assume the following class:
        public class MyThread extends Thread {
            protected volatile boolean running = true;

            public void run() {
                while (running) {
                    synchronized (someObject) {
                        while (someObject.someCondition() == false && running) {
                            try {
                                someObject.wait();
                            } catch (InterruptedException e) {
                                e.printStackTrace();
                            }
                        }
                        // do something useful with someObject
                    }
                }
            }

            public void halt() {
                running = false;
                interrupt();
            }
        }
    Assume the thread is running and the following statement evaluates to true:
        while (someObject.someCondition() == false && running)
    Then another thread calls MyThread.halt(). Even though this function sets 'running' to false (which is a volatile boolean) and interrupts the thread, the following statement is still executed:
        someObject.wait();
    We have a deadlock. The thread will never be halted. Then I came up with this, but I am not sure if it is correct:
        public class MyThread extends Thread {
            protected volatile boolean running = true;

            public void run() {
                while (running) {
                    synchronized (someObject) {
                        while (someObject.someCondition() == false && running) {
                            try {
                                someObject.wait();
                            } catch (InterruptedException e) {
                                e.printStackTrace();
                            }
                        }
                        // do something useful with someObject
                    }
                }
            }

            public void halt() {
                running = false;
                synchronized (someObject) {
                    interrupt();
                }
            }
        }
    Is this correct? Is this the most common way to do this? This seems like an obvious question, but I fail to come up with a solution. Thanks a lot for your help.

    Read the article

  • Form submits but by passing the form validation when button.click is used

    - by Cuty Pie
    I am using a form validation plugin as well as the HTML5 required attribute with input type=button or submit, together with fancybox. When the button is clicked, both validation procedures work (notifications about empty inputs appear), but instead of stopping at the notification, the form is submitted. If I don't use the button.click function but use a simple submit, then validation works perfectly. Plugin: http://validatious.org/ JavaScript function:
        $(document).ready(function() {
            $('#send').click(function() {
                $(this).attr("value", 'Applying...');
                var id = getParam('ID');
                $.ajax({
                    type: 'POST',
                    url: "send.php",
                    data: {
                        option: 'catapply',
                        tittle: $('#tittle').val(),
                        aud: aud,
                        con: con,
                        adv: adv,
                        mid: $('#mid').val(),
                        aud2: aud,
                        cid: $('#cid').val(),
                        scid: $('#scname option[selected="saa"]').val(),
                        gtt: $('input[name=gt]:radio:checked').val(),
                        sr: $(":input").serialize() + '&aud=' + aud
                    },
                    success: function(jd) {
                        $('#' + id).fadeOut("fast", function() {
                            $(this).before("<strong>Success! Your form has been submitted, thanks :)</strong>");
                            setTimeout("$.fancybox.close()", 1000);
                        });
                    }
                });
            });
        });
    And the HTML:
        <form id="form2" class="validate" style="display: none" >
            <input type='submit' value='Apply' id="send" class="sendaf validate_any" name="jobforming"/>

    Read the article

  • How long is the time frame between context switches on Windows?

    - by mattcodes
    Reading CLR via C# 2.0 (I don't have 3.0 with me at the moment) - is this still the case:
        If there is only one CPU in a computer, only one thread can run at any one time. Windows has to keep track of the thread objects, and every so often, Windows has to decide which thread to schedule next to go to the CPU. This is additional code that has to execute once every 20 milliseconds or so. When Windows makes a CPU stop executing one thread's code and start executing another thread's code, we call this a context switch. A context switch is fairly expensive because the operating system has to:
    So, circa CLR via C# 2.0, let's say we are on a Pentium 4 2.4 GHz, single core, non-HT, running XP. Every 20 milliseconds? Where a CLR thread or Java thread is mapped to an OS thread, only a maximum of 50 threads per second may get a chance to run? I've read here on SO that context switching itself is very fast, in microseconds, but how often, roughly (order-of-magnitude guesses), will say a modest 5-year-old server - Windows 2003, Pentium Xeon, single core - give the OS the opportunity to context switch? Is 20 ms in the right area? I don't need exact figures, I just want to be sure that's in the right area; it seems rather long to me.

    Read the article

  • Best way reading from dirty excel sheets

    - by Ten Ton Gorilla
    I have to manipulate some Excel documents with C#. It's a batch process with no user interaction. It's going to parse data into a database, then output nice reports. The data is very dirty and cannot be read using ADO; it is nowhere near a nice table format. Best is defined as the most stable (updates less likely to break) and clearest (most succinct) code. Fast doesn't matter; if it runs in less than 8 hours I'm fine. I have the logic to find the data worked out. All I need to make it run is basic cell navigation and get-value type functions: give me cell X's value as a string; if it matches value Y with a Levenshtein distance < 3, then give me cell Z's value. My question is, what is the best way to dig into the Excel file? VSTO? The Excel object library? A third option I'm not aware of?
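    A minimal cell-navigation sketch with the Excel interop assemblies (this assumes Excel is installed on the batch machine and C# 4's optional-COM-parameter support; on older compilers each omitted argument needs Type.Missing, and the Levenshtein helper is assumed to exist elsewhere):
        using Excel = Microsoft.Office.Interop.Excel;

        var app = new Excel.Application { Visible = false };
        Excel.Workbook book = app.Workbooks.Open(@"C:\data\dirty.xls");
        Excel.Worksheet sheet = (Excel.Worksheet)book.Worksheets[1];

        // "Give me cell X as a string; if it is close enough to Y, take cell Z."
        string x = Convert.ToString((sheet.Cells[3, 2] as Excel.Range).Value2);
        if (Levenshtein(x, "Expected label") < 3)
        {
            string z = Convert.ToString((sheet.Cells[3, 5] as Excel.Range).Value2);
            // ... push z into the database ...
        }

        book.Close(false);
        app.Quit();
    Per-cell COM calls are slow; pulling a whole used range into an object[,] via Range.Value2 and walking the array is much faster, though within an 8-hour budget either approach works.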

    Read the article

  • How to make command-line options mandatory with GLib?

    - by ahe
    I use GLib to parse some command-line options. The problem is that I want to make two of those options mandatory, so that the program terminates with the help screen if the user omits them. My code looks like this:
        static gint line = -1;
        static gint column = -1;

        static GOptionEntry options[] = {
            {"line", 'l', 0, G_OPTION_ARG_INT, &line, "The line", "L"},
            {"column", 'c', 0, G_OPTION_ARG_INT, &column, "The column", "C"},
            {NULL}
        };
        ...
        int main(int argc, char** argv)
        {
            GError *error = NULL;
            GOptionContext *context;

            context = g_option_context_new ("- test");
            g_option_context_add_main_entries (context, options, NULL);
            if (!g_option_context_parse(context, &argc, &argv, &error)) {
                usage(error->message, context);
            }
            ...
            return 0;
        }
    If I omit one or both of those parameters on the command line, g_option_context_parse() still succeeds and the values in question (line and/or column) are still -1. How can I tell GLib to fail parsing if the user doesn't pass both options on the command line? Maybe I'm just blind, but I couldn't find a flag I can put into my GOptionEntry data structure to make those fields mandatory. Of course I could check whether one of those variables is still -1, but then the user could just have passed that value on the command line, and I want to print a separate error message if the values are out of range.
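    GOption has no built-in "mandatory" flag, so the usual answer is to validate right after parsing, choosing sentinel values the user cannot legitimately pass anyway (for line/column numbers, anything negative) and reusing the generated help text; a sketch (g_option_context_get_help needs GLib 2.14 or newer):
        if (!g_option_context_parse (context, &argc, &argv, &error)) {
            usage (error->message, context);
        }

        /* -1 can only mean "not supplied" if negative values are rejected as invalid anyway */
        if (line < 0 || column < 0) {
            gchar *help = g_option_context_get_help (context, TRUE, NULL);
            g_printerr ("Error: --line and --column are mandatory.\n\n%s", help);
            g_free (help);
            return 1;
        }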

    Read the article

  • PHP Check slave status without mysql_connect timeout issues

    - by Jonathon
    I have a web app that has a master MySQL db and four slave dbs. I want to handle all (or almost all) read-only (SELECT) queries from the slaves. Our load balancer sends the user to one of the slave machines automatically, since they are also running Apache/PHP and serving web pages. I am using an include file to set up the connections to the databases, such as:
        //for master server (i.e. - UPDATE/INSERT/DELETE statements)
        $Host = "10.0.0.x";
        $User = "xx";
        $Password = "xx";
        $Link = mysql_connect( $Host, $User, $Password );
        if( !$Link ) {
            die( "Master database is currently unavailable. Please try again later." );
        }

        //this connection can be used for READ-ONLY (i.e. - SELECT statements) on the localhost
        $Host_Local = "localhost";
        $User_Local = "xx";
        $Password_Local = "xx";
        $Link_Local = mysql_connect( $Host_Local, $User_Local, $Password_Local );

        //fail back to master if slave db is down
        if( !$Link_Local ) {
            $Link_Local = mysql_connect( $Host, $User, $Password );
        }
    I then use $Link for all update queries and $Link_Local as the connection for SELECT statements. Everything works fine until the slave server database goes down. If the local db is down, the $Link_Local = mysql_connect() call takes at least 30 seconds before it gives up on trying to connect to localhost and returns to the script. This causes a huge backlog of page serves and basically shuts down the system (due to the extremely slow response time). Does anyone know of a better way to handle connections to slave servers via PHP? Or is there some kind of timeout function that could be used to stop the mysql_connect call after 2-3 seconds? Thanks for the help. I searched the other mysql_connect threads, but didn't see any that addressed this issue.
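    The 30-second hang is the connect timeout, which is configurable; a sketch of capping it before the slave connect so the fallback to the master happens within a couple of seconds (these settings apply to the classic mysql extension; mysqli/PDO have their own options):
        // Cap how long the connect attempt may block before we fall back to the master.
        ini_set('mysql.connect_timeout', 2);
        ini_set('default_socket_timeout', 2);

        $Link_Local = @mysql_connect($Host_Local, $User_Local, $Password_Local);
        if (!$Link_Local) {
            // Slave unreachable within ~2 seconds; use the master for reads instead.
            $Link_Local = mysql_connect($Host, $User, $Password);
        }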

    Read the article
