Search Results

Search found 10548 results on 422 pages for 'standard deviation'.


  • Is there a format or service for resume/CV data?

    - by Ben Dauphinee
    I have noticed through the process of signing up for various freelance and job seeking or professional network sites that they all want your resume/CV data. And I am really getting tired of copy/pasting this data, especially since I have a website. Is there a standard format or service somewhere that I do not know about for this data? If not, does anyone want to help me build something like this out? I'm thinking a service similar to OpenID that allows you to maintain a central resume to have your data pulled from. No more filling in the same data over and over, and having to maintain the copies on any of the plethora of websites that have that data. Takers?

  • c# gridview row click

    - by Martijn
    When I click on a row in my gridview, I want to go to another page with the id I get from the database. In my RowCreated event I have the following line:

        e.Row.Attributes.Add("onClick", ClientScript.GetPostBackClientHyperlink(this.grdSearchResults, "Select$" + e.Row.RowIndex));

    To prevent error messages I have this code:

        protected override void Render(HtmlTextWriter writer)
        {
            // .NET will refuse to accept "unknown" postbacks for security reasons.
            // Because of this we have to register all possible callbacks.
            // This must be done in Render, hence the override.
            for (int i = 0; i < grdSearchResults.Rows.Count; i++)
            {
                Page.ClientScript.RegisterForEventValidation(new System.Web.UI.PostBackOptions(grdSearchResults, "Select$" + i.ToString()));
            }

            // Do the standard rendering stuff.
            base.Render(writer);
        }

    My question is: how can I give a row a unique id (from the DB) so that when I click the row, another page is opened (like clicking on an href) and that page can read the id? Thanks
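    One common alternative, sketched below, skips the postback entirely: bind the database id via DataKeyNames and attach a plain JavaScript redirect in RowDataBound. The column name "Id" and the page "Details.aspx" are illustrative, not from the question:

        // Markup assumption: <asp:GridView ID="grdSearchResults" runat="server" DataKeyNames="Id" ... />
        protected void grdSearchResults_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType == DataControlRowType.DataRow)
            {
                // The row's database id, taken from the bound data key.
                string id = grdSearchResults.DataKeys[e.Row.RowIndex].Value.ToString();

                // Navigate like an href; no postback means no event-validation
                // registration is needed in Render.
                e.Row.Attributes.Add("onclick", "window.location='Details.aspx?id=" + id + "';");
                e.Row.Style["cursor"] = "pointer";
            }
        }

        // On Details.aspx, read it back with: string id = Request.QueryString["id"];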

  • Sharing Bandwidth and Prioritizing Realtime Traffic via HTB, Which Scenario Works Better?

    - by Mecki
    I would like to add some kind of traffic management to our Internet line. After reading a lot of documentation, I think HFSC is too complicated for me (I don't understand all the curves stuff; I'm afraid I will never get it right), CBQ is not recommended, and basically HTB is the way to go for most people.

    Our internal network has three "segments" and I'd like to share bandwidth more or less equally between those (at least in the beginning). Further, I must prioritize traffic according to at least three kinds of traffic (realtime traffic, standard traffic, and bulk traffic). The bandwidth sharing is not as important as the fact that realtime traffic should always be treated as premium traffic whenever possible, but of course no other traffic class may starve either.

    The question is, what makes more sense and also guarantees better realtime throughput:

    1. Creating one class per segment, each having the same rate (priority doesn't matter for classes that are not leaves, according to the HTB developer), where each of these classes has three sub-classes (leaves) for the 3 priority levels (with different priorities and different rates).

    2. Having one class per priority level on top, each having a different rate (again, priority won't matter), where each has 3 sub-classes, one per segment, such that all 3 in the realtime class have the highest prio, all 3 in the bulk class the lowest prio, and so on.

    I'll try to make this clearer with the following ASCII art image:

        Case 1:

        root --+--> Segment A
               |        +--> High Prio
               |        +--> Normal Prio
               |        +--> Low Prio
               |
               +--> Segment B
               |        +--> High Prio
               |        +--> Normal Prio
               |        +--> Low Prio
               |
               +--> Segment C
                        +--> High Prio
                        +--> Normal Prio
                        +--> Low Prio

        Case 2:

        root --+--> High Prio
               |        +--> Segment A
               |        +--> Segment B
               |        +--> Segment C
               |
               +--> Normal Prio
               |        +--> Segment A
               |        +--> Segment B
               |        +--> Segment C
               |
               +--> Low Prio
                        +--> Segment A
                        +--> Segment B
                        +--> Segment C

    Case 1 seems like the way most people would do it, but unless I misread the HTB implementation details, Case 2 may offer better prioritizing. The HTB manual says that if a class has hit its rate, it may borrow from its parent, and that when borrowing, classes with higher priority always get bandwidth offered first. However, it also says that classes having bandwidth available on a lower tree-level are always preferred over those on a higher tree level, regardless of priority.

    Let's assume the following situation: Segment C is not sending any traffic. Segment A is only sending realtime traffic, as fast as it can (enough to saturate the link alone), and Segment B is only sending bulk traffic, as fast as it can (again, enough to saturate the full link alone). What will happen?

    Case 1: Segment A-High Prio and Segment B-Low Prio both have packets to send; since A-High Prio has the higher priority, it will always be scheduled first, till it hits its rate. Now it tries to borrow from Segment A, but since Segment A is on a higher level and Segment B-Low Prio has not yet hit its rate, this class is now served first, till it also hits its rate and wants to borrow from Segment B. Once both have hit their rates, both are on the same level again and now Segment A-High Prio is going to win again, until it hits the rate of Segment A. Now it tries to borrow from root (which has plenty of bandwidth spare, as Segment C is not using any of its guaranteed share), but again, it has to wait for Segment B-Low Prio to also reach the root level. Once that happens, priority is taken into account again and this time Segment A-High Prio will get all the bandwidth left over from Segment C.

    Case 2: High Prio-Segment A and Low Prio-Segment B both have packets to send; again High Prio-Segment A is going to win as it has the higher priority. Once it hits its rate, it tries to borrow from High Prio, which has bandwidth spare, but being on a higher level, it has to wait for Low Prio-Segment B to also hit its rate. Once both have hit their rates and both have to borrow, High Prio-Segment A will win again until it hits the rate of the High Prio class. Once that happens, it tries to borrow from root, which again has plenty of bandwidth left (all bandwidth of Normal Prio is unused at the moment), but it has to wait until Low Prio-Segment B hits the rate limit of the Low Prio class and also tries to borrow from root. Finally both classes try to borrow from root, priority is taken into account, and High Prio-Segment A gets all the bandwidth root has left over.

    Both cases seem sub-optimal, as either way realtime traffic sometimes has to wait for bulk traffic, even though there is plenty of bandwidth left it could borrow. However, in Case 2 it seems like the realtime traffic has to wait less than in Case 1, since it only has to wait till the bulk traffic rate is hit, which is most likely less than the rate of a whole segment (and in Case 1 that is the rate it has to wait for). Or am I totally wrong here?

    I thought about even simpler setups, using a priority qdisc. But priority queues have the big problem that they cause starvation if they are not somehow limited, and starvation is not acceptable. Of course one can put a TBF (Token Bucket Filter) into each priority class to limit the rate and thus avoid starvation, but then a single priority class can no longer saturate the link on its own, even if all other priority classes are empty; the TBF will prevent that from happening. And this is also sub-optimal, since why shouldn't a class get 100% of the line's bandwidth if no other class needs any of it at the moment?

    Any comments or ideas regarding this setup? It seems so hard to do using standard tc qdiscs. As a programmer it would be such an easy task if I could simply write my own scheduler (which I'm not allowed to do).
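    For concreteness, a minimal tc sketch of the Case 2 layout. The interface name, line rate, and per-class rates are all assumptions for illustration:

        #!/bin/sh
        # Assumes eth0 and a 10mbit line; adjust to taste.
        DEV=eth0
        tc qdisc add dev $DEV root handle 1: htb default 30
        tc class add dev $DEV parent 1: classid 1:1 htb rate 10mbit ceil 10mbit

        # Top level: one class per priority tier; prio decides who borrows first.
        tc class add dev $DEV parent 1:1 classid 1:10 htb rate 4mbit ceil 10mbit prio 0   # realtime
        tc class add dev $DEV parent 1:1 classid 1:20 htb rate 4mbit ceil 10mbit prio 1   # standard
        tc class add dev $DEV parent 1:1 classid 1:30 htb rate 2mbit ceil 10mbit prio 2   # bulk

        # Per-segment leaves inside the realtime tier; a full setup would
        # repeat the same three-way split under 1:20 and 1:30.
        tc class add dev $DEV parent 1:10 classid 1:101 htb rate 1333kbit ceil 10mbit prio 0
        tc class add dev $DEV parent 1:10 classid 1:102 htb rate 1333kbit ceil 10mbit prio 0
        tc class add dev $DEV parent 1:10 classid 1:103 htb rate 1334kbit ceil 10mbit prio 0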

  • Using a javax.servlet.Filter with Compojure

    - by mikera
    I'm trying to build a simple web site using Clojure / Compojure and want to apply a servlet filter to the request / response (i.e. a standard javax.servlet.Filter instance). E.g. if the current source code is:

        (defroutes my-app
          (GET "/*" (html [:h1 "Hello Foo!!"])))

    I would like to add a filter like this:

        (defroutes my-app
          (GET "/*" (FILTER my-filter-name (html [:h1 "Hello Foo!!"]))))

    Where my-filter-name is some arbitrary instance of javax.servlet.Filter. Any idea how to do this effectively and elegantly?
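    Compojure routes are Ring handlers, so the idiomatic analogue of a servlet filter is Ring middleware rather than a javax.servlet.Filter adapted directly; a minimal sketch of that substitute approach (wrap-my-filter is an invented name):

        (defn wrap-my-filter
          "Runs around the handler, like a Filter runs around its chain."
          [handler]
          (fn [request]
            ;; pre-processing: inspect or modify the request map here
            (let [response (handler request)]
              ;; post-processing: inspect or modify the response map here
              response)))

        (defroutes my-app
          (GET "/*" (html [:h1 "Hello Foo!!"])))

        ;; In Ring terms the routes are just a handler, so wrapping is composition:
        (def app (wrap-my-filter my-app))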

  • Indy FTP, large files and NAT routers

    - by Lobuno
    Hello! I have been using Indy to transfer files via FTP for years now, but have not been able to find a satisfactory solution for the following problem. When a user is uploading a large file from behind a router, sometimes the following happens: the file is uploaded OK, but in the meantime the command channel gets disconnected because of a timeout. Normally this doesn't happen with a direct connection to the server, because the server "knows" that a transfer is taking place on the data channel. Some routers are not aware of this, though, and close the command channel. Many programs send a NOOP command periodically to keep the command channel alive, even if this is not part of the standard FTP specification. My question: how do I do that? Do I send the NOOP command in the OnWork event? Does this cause any collateral damage in some way? For instance, do I need to process some response? How do I best solve this problem?
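    A Delphi sketch of the OnWork idea, with loud caveats: it assumes the Indy 10 OnWork signature, assumes the server tolerates a control-channel command mid-transfer, and FLastNoop is an invented TDateTime field on the form (SecondsBetween needs DateUtils in the uses clause):

        procedure TForm1.FtpWork(ASender: TObject; AWorkMode: TWorkMode; AWorkCount: Int64);
        begin
          if AWorkMode = wmWrite then               // only while uploading
            if SecondsBetween(Now, FLastNoop) >= 60 then
            begin
              // Traffic on the command channel keeps the NAT mapping alive.
              IdFTP1.SendCmd('NOOP');
              FLastNoop := Now;
            end;
        end;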

  • py2app prescripts

    - by yoav.aner
    The py2app documentation mentions prescripts, which are run by __boot__.py prior to the main python script. I couldn't find a way to easily specify a prescript in the setup.py file or build process. I did, however, manage to 'hack' __boot__.py manually and add another _run(prescript) call before my main _run(main_script), and it seemed to work fine. It would be much better to use the standard py2app build process, though. What I'm essentially trying to do is monkey-patch my site-packages.zip file prior to the main script being launched. The prescript checks for updates on the server and, if there are any, downloads them and overwrites the site-packages.zip file. Much quicker than having to re-install the application from scratch. Any ideas?
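    For reference, a minimal sketch of what such a prescript might look like. The URL is hypothetical, the python2.6 path segment is an assumption about the bundle layout, and it assumes the app may overwrite files inside its own Resources directory:

        # prescript.py -- run by __boot__.py before the main script
        import os, shutil, urllib2

        UPDATE_URL = 'http://example.com/myapp/site-packages.zip'  # hypothetical
        resources = os.environ['RESOURCEPATH']  # set by py2app's boot script
        target = os.path.join(resources, 'lib', 'python2.6', 'site-packages.zip')

        def update():
            remote = urllib2.urlopen(UPDATE_URL)
            tmp = target + '.new'
            with open(tmp, 'wb') as f:
                shutil.copyfileobj(remote, f)
            os.rename(tmp, target)  # atomic swap on the same volume

        try:
            update()
        except Exception:
            pass  # never block app launch on a failed update check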

  • Long-running transactions structured approach

    - by disown
    I'm looking for a structured approach to long-running (hours or more) transactions. As mentioned here, these types of interactions are usually handled by optimistic locking and manual merge strategies. It would be very handy to have a more structured approach to this type of problem using standard transactions. Various long-running interactions, such as user registration, order confirmation etc., all have transaction-like semantics, and it is both error-prone and tedious to invent your own fragile manual roll-back and/or time-out/clean-up strategies. Taking an RDBMS as an example, I realize that there would be a major performance cost associated with keeping all the transactions open. As an alternative, I could imagine a database supporting two isolation levels/strategies simultaneously, one for short-running and one for long-running conversations. Long-running conversations could then, for instance, have stricter limitations on data access to facilitate them taking more time (read-only semantics on some data, optimistic locking semantics etc). Are there any solutions which could do something similar?
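    For contrast, a minimal sketch of the optimistic-locking fallback the first sentence refers to; the table and column names are invented:

        -- Read the row at the start of the conversation, remembering its version.
        SELECT id, status, version FROM orders WHERE id = 42;   -- say version = 7

        -- Hours later, commit the outcome only if nobody changed the row meanwhile.
        UPDATE orders
           SET status = 'CONFIRMED', version = version + 1
         WHERE id = 42 AND version = 7;
        -- 0 rows updated means a conflict: re-read and merge manually.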

  • Clear fields on CreateUserWizard, Login control

    - by Midhat
    I have a CreateUserWizard and a Login control on a page. Both of them are customized (standard textboxes are replaced by RadTextBoxes). When I enter a value in the form and refresh the browser without submitting, the forms retain their values. Is there any way I can clear these fields on refresh? I have tried setting EnableViewState to false on the controls (as seen somewhere on the web) but it doesn't work. I have added code in page load to clear the fields if the page is not a postback. It looks something like this:

        if (!IsPostBack)
        {
            ((RadTextBox)Login1.FindControl("Username")).Text = "";
            ((RadTextBox)Login1.FindControl("Password")).Text = "";
            ((RadTextBox)CreateUserWizard1.CreateUserStep.ContentTemplateContainer.FindControl("Username")).Text = "";
            ((RadTextBox)CreateUserWizard1.CreateUserStep.ContentTemplateContainer.FindControl("Password")).Text = "";
            ((RadTextBox)CreateUserWizard1.CreateUserStep.ContentTemplateContainer.FindControl("confirmPassword")).Text = "";
            ((RadTextBox)CreateUserWizard1.CreateUserStep.ContentTemplateContainer.FindControl("Email")).Text = "";
        }

    Still to no avail. Any suggestions?
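    One common cause is the browser itself restoring field values on refresh, which no amount of server-side clearing fixes because the restore happens after the page is rendered. A minimal sketch of turning that off, assuming this is what is happening here:

        if (!IsPostBack)
        {
            var username = (RadTextBox)Login1.FindControl("Username");
            // Ask the browser not to restore or auto-fill this field on refresh.
            username.Attributes["autocomplete"] = "off";
            // Repeat for the other fields, or set it on the surrounding form tag.
        }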

  • Breakpoints are ignored when debugging in Visual Studio 2008 on 64-bit system

    - by Arnold Zokas
    I'm trying to debug an ASP.NET web application in this environment:

    - Windows Server Standard 2008 SP2 x64
    - Single-core CPU
    - 4GB RAM
    - Visual Studio 2008 with Remote Debugger SP1
    - .NET 3.5
    - Web Application running in IIS in 64-bit mode

    The code I am trying to debug is a simple event handler with some basic sequential code. What I observe is that breakpoints get randomly ignored and VS often exits debug mode when I try to step through the code. The code is compiled in debug mode. I've tried all the usual steps: iisreset, reboot VM, rebuild solution, reinstall Visual Studio. Any ideas?

  • Decode sparse json array to php array

    - by Isaac Sutherland
    I can create a sparse php array (or map) using the command:

        $myarray = array(10 => 'hi', 'test20' => 'howdy');

    I want to serialize/deserialize this as JSON. I can serialize it using the command:

        $json = json_encode($myarray);

    which results in the string {"10":"hi","test20":"howdy"}. However, when I deserialize this and cast it to an array using the command:

        $mynewarray = (array)json_decode($json);

    I seem to lose any mappings whose keys were not valid php identifiers. That is, $mynewarray has the mapping 'test20'=>'howdy', but not 10=>'hi' nor '10'=>'hi'. Is there a way to preserve the numerical keys in a php map when converting to and back from JSON using the standard json_encode / json_decode functions?
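    The second argument to json_decode sidesteps the object-to-array cast entirely and keeps numeric keys; a minimal sketch:

        <?php
        $myarray = array(10 => 'hi', 'test20' => 'howdy');
        $json = json_encode($myarray);       // {"10":"hi","test20":"howdy"}

        // Decode straight to an associative array instead of casting stdClass;
        // PHP normalizes the "10" key back to the integer 10.
        $mynewarray = json_decode($json, true);

        var_dump($mynewarray[10]);        // string(2) "hi"
        var_dump($mynewarray['test20']);  // string(5) "howdy"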

  • Microsoft Sync Framework - How to reprovision a table (or entire scope) after schema changes?

    - by Rabbi
    B"H I have already set up syncing with Microsoft Sync Framework, and now I need to add fields to a table. How do I re-provision the databases? The setup is exceedingly simple:

    - Two SQL Express 2008 servers
    - The scope includes the entire database
    - Using Microsoft Sync Framework 2.0
    - Synchronizing by direct access
    - Using the standard new SqlSyncProvider

    Do I make the structural changes at both ends? Or do I only change one server and let Sync Framework somehow propagate the change? Do I need to delete the _tracking tables and/or the stored procedures? How about the triggers? Has anyone been using the Sync Framework? Please help.

  • iPhone – Best method to import/draw UI graphic elements? CGContextDrawPDFPage?

    - by Ross
    Hello, what is the best way to use custom UI graphics on the iPhone? I've come across CGContextDrawPDFPage and Panic's ShrinkIt. Should I be storing my vector UI graphics as PDFs and loading them using CGContextDrawPDFPage to draw them? I previously asked how Apple stores their UI graphics, and the answer was crushed PNG. These are the options as I see them, but I would really like to know what techniques other people use. This question is for vector graphics only; I'm looking for what is standard / most effective / most efficient:

    - PNG (bitmapped image)
    - Custom UIView drawing code (generated from Opacity)
    - PDF (I've not used this method; is it with CGContextDrawPDFPage?)

    Many thanks
    Ross
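    For the PDF option, a minimal sketch of drawing page 1 of a bundled PDF inside a view's drawRect: (the file name "icon" is illustrative):

        NSString *path = [[NSBundle mainBundle] pathForResource:@"icon" ofType:@"pdf"];
        CGPDFDocumentRef doc = CGPDFDocumentCreateWithURL((CFURLRef)[NSURL fileURLWithPath:path]);
        CGPDFPageRef page = CGPDFDocumentGetPage(doc, 1);   // pages are 1-based

        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSaveGState(ctx);
        // PDF space is flipped relative to UIKit, so flip the context first.
        CGContextTranslateCTM(ctx, 0, self.bounds.size.height);
        CGContextScaleCTM(ctx, 1, -1);
        CGContextDrawPDFPage(ctx, page);
        CGContextRestoreGState(ctx);
        CGPDFDocumentRelease(doc);  // in real code, cache the document instead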

  • Refactoring a custom User model to user UserProfile: Should I create a custom UserManager or add use

    - by BryanWheelock
    I have been refactoring an app that had customized the standard User model from django.contrib.auth.models, by creating a UserProfile and defining it with AUTH_PROFILE_MODULE. The problem is that the attributes in UserProfile are used throughout the project to determine what the User sees. I had been creating tests and putting in this type of statement repeatedly:

        user = User.objects.get(pk=1)
        user_profile = user.get_profile()
        if user_profile.karma > 10:
            do_some_stuff()

    This is tedious and I'm now wondering if I'm violating the DRY principle. Would it make more sense to create a custom UserManager that automatically loads the UserProfile data when the user is requested? I could even iterate over the UserProfile attributes and append them to the User model. This would save me having to update all the references to the custom model attributes that litter the code. Of course, I'd have to reverse the process to allow the User and UserProfile models to be updated correctly. Which approach is more Django-esque?
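    A minimal sketch of one middle ground between the two approaches: attach a lazily cached profile property to User, so call sites shrink without copying attributes across models. The caching attribute name is invented:

        from django.contrib.auth.models import User

        def _profile(self):
            # Cache the profile on the instance so repeated access costs one query.
            if not hasattr(self, '_cached_profile'):
                self._cached_profile = self.get_profile()
            return self._cached_profile

        User.profile = property(_profile)

        # Call sites then become:
        # if user.profile.karma > 10:
        #     do_some_stuff()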

  • Objective C message dispatch mechanism

    - by Dolphin
    I am just starting to play around with Objective-C (writing toy iPhone apps) and I am curious about the underlying mechanism used to dispatch messages. I have a good understanding of how virtual functions in C++ are generally implemented and what the costs are relative to a static or non-virtual method call, but I don't have any background with Obj-C to know how messages are sent. Browsing around, I found this loose benchmark, which mentions IMP-cached messages being faster than virtual function calls, which are in turn faster than a standard message send. I am not trying to optimize anything, just to get a deeper understanding of how exactly the messages get dispatched.

    - How are Obj-C messages dispatched?
    - How do Instance Method Pointers (IMPs) get cached, and can you (in general) tell by reading the code if a message will get cached?
    - Are class methods essentially the same as a C function (or static class method in C++), or is there something more to them?

    I know some of these questions may be 'implementation dependent' but there is only one implementation that really counts.
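    For concreteness, a minimal sketch of the manual IMP caching that benchmark refers to; the method and loop are illustrative:

        // Look the implementation up once, then call it directly in a hot loop,
        // skipping objc_msgSend's selector lookup on every iteration.
        SEL sel = @selector(addObject:);
        IMP imp = [myArray methodForSelector:sel];
        for (int i = 0; i < 100000; i++) {
            // An IMP is a C function pointer taking (self, _cmd, args...).
            ((void (*)(id, SEL, id))imp)(myArray, sel, someObject);
        }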

  • Remote debugging in Windows Embedded

    - by Ben Schoepke
    Hi, I'm moving from Windows CE 6 to Windows Embedded Standard 7 for a project and am wondering how remote debugging of .Net apps works with Windows Embedded target devices. In CE with VS2008 and ActiveSync (USB), I can hit F5 and my app is automatically deployed to the target device and executed so I can step through my breakpoints just like I would if I were debugging locally. Is there an equivalent remote debugging solution for Windows Embedded debugging? A quick glance through the Visual Studio "Remote Debugger" documentation makes the whole thing seem a lot clunkier/less integrated. Is there an easy way to debug applications on target devices running Windows Embedded like I would with CE? Thanks, Ben

  • Bourne Script: Redirect success messages but NOT error messages

    - by sixtyfootersdude
    This command:

        keytool -import -file "$serverPath/$serverCer" -alias "$clientTrustedCerAlias" -keystore "$clientPath/$clientKeystore" -storepass "$serverPassword" -noprompt

    will, when it runs successfully, output:

        Certificate was added to keystore

    I tried redirecting standard out with:

        keytool ... > /dev/null

    but it still prints. It appears that the message is being output on standard error, since it is not displayed when I do this:

        keytool ... > /dev/null 2>&1

    However, that is not what I want. I would like error messages to be output normally, but I do not want "success" messages output to the command line. Any ideas? Whatever happened to the unix convention, "If it works, do not output anything"?
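    Since keytool writes both success and error messages to stderr, one way through is a stream swap plus a filter; a minimal sketch, with the grep pattern matched to the known success line:

        # Send stderr into the pipe, discard stdout, drop only the success line,
        # and return whatever survives to stderr where it belongs.
        keytool -import -file "$serverPath/$serverCer" \
            -alias "$clientTrustedCerAlias" \
            -keystore "$clientPath/$clientKeystore" \
            -storepass "$serverPassword" -noprompt 2>&1 >/dev/null \
          | grep -v '^Certificate was added to keystore' >&2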

  • Changing data in a django modelform

    - by Matt Hampel
    I get data in from POST and validate it via this standard snippet:

        entry_formset = EntryFormSet(request.POST, request.FILES, prefix='entries')
        if entry_formset.is_valid():
            ....

    The EntryFormSet modelform overrides a foreign key field widget to present a text field. That way, the user can enter an existing key (suggested via an Ajax live search) or enter a new key, which will be seamlessly added. I use this try-except block to test if the object already exists, and if it doesn't, I add it:

        entity_name = request.POST['entries-0-entity']
        try:
            entity = Entity.objects.get(name=entity_name)
        except Entity.DoesNotExist:
            entity = Entity(name=entity_name)
            entity.slug = slugify(entity.name)
            entity.save()

    However, I now need to get that entity back into the entry_formset. It thinks that entries-0-entity is a string (that's how it came in); how can I directly access that value of the entry_formset and get it to take the object reference instead?
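    A minimal sketch of one approach: collapse the try/except with get_or_create, then bind the formset to a copy of the POST data with the pk substituted for the raw name, since a ModelChoiceField validates against the pk:

        entity, created = Entity.objects.get_or_create(
            name=entity_name,
            defaults={'slug': slugify(entity_name)},
        )

        # QueryDicts are immutable, so copy before substituting the value.
        data = request.POST.copy()
        data['entries-0-entity'] = str(entity.pk)
        entry_formset = EntryFormSet(data, request.FILES, prefix='entries')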

  • What's a good Minimal Server-Side Javascript Framework?

    - by Nick Retallack
    So I was writing a web app with web.py that uses plenty of client-side javascript, and my database is on couchdb so the queries are in javascript too, and eventually I just got to thinking, why not skip the python and go all javascript? Besides, some functions need to run once on the client and again on the server to make sure you're not spoofing, so why translate between javascript and python? So I'm looking for a simple lightweight javascript web framework. All I really need is the url routing, request and response stuff (standard wsgi?), and a way to hook into a big http server like nginx. What do you guys recommend?

  • Nice shadow effect in text (UISegmentedControl)

    - by Matthias
    Hi, I would like to bring some color to the text of my UISegmentedControl. I've searched a bit about this topic, but it seems not to be possible out of the box. I found a nice blog post on how to build an image out of custom text and then assign it to the segmented control. That works fine, but the text in these created images does not have the nice little shadow effect of the original ones. So, does anyone know how to create such a shadow effect? I guess Apple does the same (building an image for the text) with the standard segmented control. Thanks for your help. Regards, Matthias
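    A minimal sketch of rendering the text into an image with a shadow; the offset, blur, and colors below are guesses at the standard-control look, not Apple's actual values:

        NSString *text = @"Segment";
        UIFont *font = [UIFont boldSystemFontOfSize:12];
        CGSize size = [text sizeWithFont:font];

        UIGraphicsBeginImageContext(size);
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        // Subtle dark shadow offset by one point; flip the sign if the
        // shadow lands on the wrong side in your context.
        CGContextSetShadowWithColor(ctx, CGSizeMake(0, -1), 0,
                                    [UIColor colorWithWhite:0 alpha:0.5].CGColor);
        [[UIColor whiteColor] set];
        [text drawAtPoint:CGPointZero withFont:font];
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();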

  • Enable Session state in sharePoint 2010

    - by Albert D. Kallal
    I set up a test box computer with Server 2008 (Standard edition, not R2 and not the Hyper-V edition). I then installed SharePoint 2010. I was amazed how easily the whole setup went (the prerequisites setup on the SharePoint disc made this process oh so easy – great install system). This test box is being used for testing Access web services. I am able to publish Access applications to this test server, and they publish and run just fine on the SharePoint site through a web browser. However, the one thing that does not work is launching an Access report. The error message I get back is:

        This report failed to load because session state is not turned on.

    I can't seem to find the setting anywhere to turn session state on. Any hints or links on how to enable session state in SharePoint 2010 would be most appreciated.
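    In SharePoint 2010 this is exposed as a service application enabled from PowerShell rather than a setting in Central Administration; a minimal sketch, run from the SharePoint 2010 Management Shell with farm-admin rights:

        # Creates the ASP.NET session state service application and its database.
        Enable-SPSessionStateService -DefaultProvision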

  • Library to parse ERB files

    - by Douglas Sellers
    I am attempting to parse, not evaluate, rails ERB files in a Hpricot/Nokogiri type manner. The files I am attempting to parse contain HTML fragments intermixed with dynamic content generated using ERB (standard rails view files). I am looking for a library that will not only parse the surrounding content, much the way that Hpricot or Nokogiri will, but will also treat the ERB symbols (<%, <%= etc.) as though they were html/xml tags. Ideally I would get back a DOM-like structure where the <%, <%= etc. symbols would be included as their own node types. I know that it is possible to hack something together using regular expressions, but I was looking for something a bit more reliable, as I am developing a tool that I need to run on a very large view code base where both the html content and the erb content are important. For example, content such as:

        blah blah blah <div>My Great Text <%= my_dynamic_expression %></div>

    Would return a tree structure like:

        root
         - text_node (blah blah blah)
         - element (div)
            - text_node (My Great Text )
            - erb_node (<%=)
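    One hedged approach: rewrite the ERB delimiters into placeholder elements, then let Nokogiri build the tree. A sketch only; the erb tag name is invented, and this will mishandle ERB that sits inside attribute values or emits unescaped markup:

        require 'nokogiri'

        src = 'blah blah blah <div>My Great Text <%= my_dynamic_expression %></div>'

        # Turn <%= expr %> and <% code %> into parseable <erb> elements.
        xml = src.gsub(/<%(=?)(.*?)%>/m) do
          type = $1 == '=' ? 'expression' : 'code'
          "<erb type='#{type}'>#{$2.strip}</erb>"
        end

        doc = Nokogiri::HTML.fragment(xml)
        doc.traverse { |node| puts "#{node.name}: #{node.text.strip}" }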

  • What are the Build Settings for a Universal iPhone and iPad Application

    - by McPragma
    What would be the settings to build, and have accepted, a universal app for both the iPad and iPhone? That is, what would be the settings under Project Info > Build for:

    - Architectures [ARCHS] ("Standard (armv6)" or "Optimized (armv6 armv7)" or other)
    - Valid Architectures [VALID_ARCHS] ("armv6 armv7" or other)
    - Build Active Architecture Only (checked or unchecked)

    and what would we select in the dropdown (Device - 3.2|Distribution|armv?)? We're either getting warnings at build time that we haven't built for armv7, or the app upload is rejected on account of "The binary you uploaded was invalid. When supporting iPhone, the executable must include support for the armv6 architecture". Thank you for your help.

  • SQlite: Column format for unix timestamp; Integer types

    - by SF.
    Original problem: what is the right column format for a unix timestamp? The net is full of confusion: some posts claim SQLite has no unsigned types, either whatsoever or with the exception of the 64-bit int type (but there are (counter-)examples that invoke UNSIGNED INTEGER). The data types page mentions it only in a bigint example. It also claims there is a 6-byte integer but doesn't give a name for it. Of course a standard INTEGER, being 4-byte signed, would store unix timestamps as negative numbers. I've heard that some systems return 64-bit timestamps too. OTOH I'm not too fond of wasting 4 bytes to store 1 extra bit (the top bit of the timestamp), and even if I have to pick a bigger data format, I'd rather go for the 6-byte one. I've even seen a post that claims the SQLite unix timestamp is of type REAL...

    Complete problem: could someone please clarify that mess?
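    For what it's worth, SQLite's INTEGER storage class is a 64-bit signed value stored in 1, 2, 3, 4, 6, or 8 bytes depending on the magnitude of each stored value, so a plain INTEGER column already handles post-2038 timestamps without wasting space on small ones. A minimal sketch with an invented table name:

        CREATE TABLE events (
            id INTEGER PRIMARY KEY,
            created_at INTEGER NOT NULL   -- unix timestamp; stored in as few bytes as fit
        );

        INSERT INTO events (created_at) VALUES (strftime('%s', 'now'));

        -- Render it back as a date when querying:
        SELECT datetime(created_at, 'unixepoch') FROM events;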

  • VS2010 / Code Analysis: Turn off a rule for a project without custom ruleset....

    - by TomTom
    ...any chance? The scenario is this: for our company we are developing a standard for how code should look. This will be the MS full rule set as it stands now. For some specific projects we may want to turn off specific rules, simply because for a specific project this is a "known exception". Example? CA1026 - while perfectly OK in most cases, there are 1-2 specific libraries where we don't want to change those. We also want to avoid having a custom rule set. OTOH putting in a suppress attribute on every occurrence gets pretty convoluted pretty fast. Any way to turn off a code analysis warning for a complete assembly without a custom rule set? We'd rather have that in a specific file (GlobalSuppressions.cs) than in a rule set, for maintenance reasons and to be more explicit ;)

  • Testing sample code in python modules

    - by Andrew Walker
    I'm in the process of writing a python module that includes some samples. These samples aren't unit-tests, and they are too long and complex to be doctests. I'm interested in best practices for automatically checking that these samples run. My current project layout is pretty standard, except that there is an extra top-level makefile that has build, install, unittest, coverage and profile targets, which delegate responsibility to setup.py and nose as required.

        projectname/
            Makefile
            README
            setup.py
            samples/
                foo-sample
                foobar-sample
            projectname/
                __init__.py
                foo.py
                bar.py
            tests/
                test-foo.py
                test-bar.py

    I've considered adding a sampletest module, or adding nose.tools.istest decorators to the entry-point functions of the samples, but for a small number of samples these solutions sound a bit ugly. This question is similar to http://stackoverflow.com/questions/301365/automatically-unit-test-example-code, but I assume python best practices will differ from C#.
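    A minimal sketch of one way to fold the samples into the existing nose run, assuming each sample is an executable script that exits non-zero on failure; the file name is invented:

        # tests/test_samples.py
        import os
        import subprocess

        SAMPLES = os.path.join(os.path.dirname(__file__), os.pardir, 'samples')

        def test_samples():
            # Nose test generator: yields one test case per sample script.
            for name in sorted(os.listdir(SAMPLES)):
                yield check_sample, os.path.join(SAMPLES, name)

        def check_sample(path):
            assert subprocess.call([path]) == 0, '%s exited non-zero' % path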
