Search Results

Search found 35795 results on 1432 pages for 'upgrade best practice'.

Page 270 of 1432

  • Best way to associate data files with particular tests in RSpec / Ruby

    - by Bill T
    For my RSpec tests I would like to automatically associate data files with each test. To clarify: each of my tests requires an XML file as input data, plus some XPath statements to validate the responses it gets back. I would like to externalize the XML and XPath as files and have the testing framework associate them with the particular test being run, using the unique ID of the test as the file name. I managed to get this behavior, but the solution isn't very clean. I wrote a helper method that takes the value of "description" and combines it with __FILE__ to create a unique identifier, which is set into a global variable that other utilities can access. The unique identifier is then used to locate the data files I need. The ugly part is that I have to call this helper method as the first line of every test. An RSpec example of mine looks like this:

        describe "Basic functions of this server I'm testing" do
          it "should give me back a response" do
            # Sets a global var to: "my_tests_spec.rb_should_give_me_back_a_response"
            TestHelper::who_am_i __FILE__, description
            ...
          end
        end

    Is there some better/cleaner/slicker way to get a unique ID for each test that I could use to associate data files with? Perhaps something built into RSpec I'm unaware of? Thank you, -Bill
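
    For reference, a minimal sketch of the helper approach described above (the TestHelper module name and the who_am_i call come from the question; the test_data directory convention and the data_file helper are hypothetical additions for illustration):

        # test_helper.rb -- derives a unique test ID from the spec file and description
        module TestHelper
          # Builds e.g. "my_tests_spec.rb_should_give_me_back_a_response" and
          # stores it in a global that other utilities can read.
          def self.who_am_i(file, description)
            $current_test_id = "#{File.basename(file)}_#{description.gsub(/\s+/, '_')}"
          end

          # Hypothetical helper: locates the data file named after the current test,
          # e.g. test_data/my_tests_spec.rb_should_give_me_back_a_response.xml
          def self.data_file(extension)
            File.join('test_data', "#{$current_test_id}.#{extension}")
          end
        end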

  • Best means to store data locally when offline

    - by mickartz
    I am in the midst of writing a small program (more to experiment with VS 2010 than anything else). Despite being an experiment it has some practical use for our local athletics club. My plan is to:

        1. Access the DB (currently online) to download the current members and store them locally on a laptop (this is an MS SQL table used to power the club's website).
        2. Take the laptop to the event (yes, there ARE places that don't have internet coverage).
        3. Add members to that day's race (also rows from a SQL table, though no changes would be made to this one).
        4. Record the results (new records in a third table).
        5. Once home, showered and within internet access again, upload/edit the tables as per the race results and member changes.

    I was thinking I'd write XML files locally with the data, including a field to indicate changes. If anyone can point me in a direction I would appreciate it... hell, if anyone could tell me whether this approach has a name, I'd appreciate that too.
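
    A minimal sketch of the offline-copy idea in C#/ADO.NET (connection string, table and file names are illustrative). Incidentally, the disconnected DataTable already tracks row changes via DataRow.RowState, so the pattern does have a name: disconnected (offline) data access.

        using System.Data;
        using System.Data.SqlClient;

        string connStr = "...";  // your SQL Server connection string

        // download while online, persist to XML, work offline, reload later
        var members = new DataTable("Members");
        using (var da = new SqlDataAdapter("SELECT * FROM Members", connStr))
            da.Fill(members);
        members.WriteXml(@"C:\club\members.xml", XmlWriteMode.WriteSchema);

        // at the event, with no connectivity:
        var offline = new DataTable("Members");
        offline.ReadXml(@"C:\club\members.xml");
        // rows edited offline keep their DataRowState (Added/Modified), so a
        // separate "changed" flag may not even be needed when syncing later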

  • Best indexing strategy for several varchar columns in Postgres

    - by Corey
    I have a table with 10 columns that need to be searchable (the table itself has about 20 columns). The user will enter query criteria for at least one of the columns, but possibly all ten. All non-empty criteria are then combined into an AND condition. Suppose the user provided non-empty criteria for column1, column4 and column8; the query would be:

        select *
        from the_table
        where column1 like '%column1_query%'
          and column4 like '%column4_query%'
          and column8 like '%column8_query%'

    So my question is: am I better off creating one index with 10 columns? Ten indexes with one column each? Or do I need to find out which sets of columns are queried together frequently and create indexes for them (an index on columns 1, 4 and 8 in the case above)? If my understanding is correct, a single index of 10 columns would only work effectively if all 10 columns are in the condition. Open to any suggestions here. Additionally, the row count of the table is only expected to be around 20-30K rows, but I want to make sure any and all searches on the table are fast. Thanks!
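
    One hedged note on this: because the patterns have a leading wildcard (LIKE '%...%'), ordinary btree indexes (single-column or composite) cannot be used at all, so the question may be moot as posed. A sketch of what can help, assuming PostgreSQL 9.1 or later with the pg_trgm contrib module (table/column names from the question):

        -- trigram GIN indexes support unanchored LIKE lookups (PostgreSQL 9.1+)
        CREATE EXTENSION pg_trgm;
        CREATE INDEX the_table_column1_trgm ON the_table USING gin (column1 gin_trgm_ops);
        CREATE INDEX the_table_column4_trgm ON the_table USING gin (column4 gin_trgm_ops);
        -- one index per searched column; the planner ANDs them via bitmap scans

    That said, at 20-30K rows a sequential scan is often fast enough that no index is needed; measure before adding ten of them.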

  • Best C++ development environment in Linux

    - by Bruce
    I have some experience with Eclipse and Qt Creator and am somewhat disappointed in their debuggers, less so in their editors. On Windows I like Visual Studio for debugging and SlickEdit for editing (SE is also available on Linux). Is there an IDE that is somehow better than the two mentioned?

  • What is the best way to optimize my JSON on an ASP.NET MVC site

    - by ooo
    I am currently using jqGrid on an ASP.NET MVC site. We have a pretty slow network (internal application) and the grid seems to take a long time to load (the issue is partly network, partly parsing and rendering). I am trying to figure out how to minimize what I send over to the client to make it as fast as possible. Here is a simplified view of my controller action that loads data into the grid:

        [AcceptVerbs(HttpVerbs.Get)]
        public ActionResult GridData1(GridData args)
        {
            var paginatedData = applications.GridPaginate(args.page ?? 1, args.rows ?? 10, i => new
            {
                i.Id,
                Name = "<div class='showDescription' id='" + i.Id + "'>" + i.Name + "</div>",
                MyValue = GetImageUrl(_map, i.Value, "star"),
                ExternalId = string.Format("<a href=\"{0}\" target=\"_blank\">{1}</a>",
                    Url.Action("Link", "Order", new { id = i.Id }), i.Id),
                i.Target,
                i.Owner,
                EndDate = i.EndDate,
                Updated = "<div class='showView' aitId='" + i.AitId + "'>" + GetImage(i.EndDateColumn, "star") + "</div>"
            });
            return Json(paginatedData);
        }

    So I am building up JSON data (about 200 records of the above) and sending it back to the GUI to put in the jqGrid. The one thing I can think of is the repeated data: in some of the JSON fields I am wrapping the raw data in HTML, and it is the same HTML on every record. It seems it would be more efficient to send just the data and "append" the HTML around it on the client side. Is this possible? Then I would only be sending the actual data over the wire and the client side would add the surrounding HTML (the divs, etc.). Also, if there are any other suggestions on how I can minimize the size of my messages, that would be great. I guess at some point these solutions will increase the client-side load, but it may be worth it to cut down on network traffic.
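
    It is possible: jqGrid supports custom formatters that run on the client, so the server can send bare values. A hedged sketch (jqGrid 3.x-era API; the column and CSS class names follow the question, and it is assumed here that the row's Id arrives as the first cell of each row):

        // colModel entry: the server sends only the raw name string,
        // the client wraps it in the repeated markup
        { name: 'Name', index: 'Name',
          formatter: function (cellvalue, options, rowObject) {
              // rowObject[0] is assumed to hold the record's Id
              return "<div class='showDescription' id='" + rowObject[0] + "'>"
                     + cellvalue + "</div>";
          }
        }

    The same idea applies to the ExternalId and Updated columns.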

  • Best way to correct garbled data caused by false encoding

    - by ercan
    Hi all, I have a set of data that contains garbled text fields, because of encoding errors during many imports/exports from one database to another. Most of the errors were caused by converting UTF-8 to ISO-8859-1. Strangely enough, the errors are not consistent: the word 'München' appears as 'MÃ¼nchen' in some places and as 'MÃœnchen' in others. Is there a trick in SQL Server to correct this kind of crap? The first thing I can think of is to exploit the COLLATE clause, so that Ã¼ is interpreted as ü, but I don't exactly know how. If it isn't possible to fix it at the DB level, do you know of any tool that helps with a bulk correction? (Not a manual find/replace tool, but a tool that somehow guesses the garbled text and corrects it.)
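
    A hedged sketch of the brute-force DB-level fix (table and column names illustrative): map the known UTF-8-as-Latin-1 byte pairs back to the intended characters, forcing a binary collation so REPLACE doesn't match accent-insensitively:

        -- extend the REPLACE chain as new garbled sequences turn up
        UPDATE the_table
        SET the_column = REPLACE(REPLACE(REPLACE(
                the_column COLLATE Latin1_General_BIN,
                'Ã¼', 'ü'),
                'Ãœ', 'Ü'),
                'Ã¶', 'ö');

    This only cures sequences you have enumerated; a script that decodes the column as Latin-1 and re-encodes it as UTF-8 outside the database is the more general repair.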

  • Excel - Best Way to Connect With Access Data

    - by gamerzfuse
    Hello there, here is the situation we have: (a) I have an Access database/application that records a significant amount of data; significant fields would be hours, # of sales, # of unreturned calls, etc. (b) I have an Excel document that connects to the Access database and pulls data in to visualize it. As it stands now, the Excel file has a Refresh button that loads new data. The data is loaded into a large PivotTable. The main 'visual form' then uses VLOOKUP to pull results out of the PivotTable, based on the related hours. This operation is slow (~10 seconds) and seems redundant and inefficient. Is there a better way to do this? I am willing to go just about any route - just need directions. Thanks in advance! Update: I have confirmed (thanks to helpful comments/responses) that the problem is the data loading itself; removing all the VLOOKUPs only took a second or two off the load time. So the question stands: how can I get the data quickly and reliably (it loads around 3000 records into the PivotTables)?
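
    One hedged alternative: skip the PivotTable refresh entirely and pull just the needed columns straight into a worksheet with ADO from VBA (provider string, file path, sheet and table names are all illustrative):

        ' Pull rows straight from Access into a worksheet in one call
        Dim cn As Object, rs As Object
        Set cn = CreateObject("ADODB.Connection")
        cn.Open "Provider=Microsoft.ACE.OLEDB.12.0;" & _
                "Data Source=C:\data\club.accdb"
        Set rs = cn.Execute("SELECT Hours, Sales, UnreturnedCalls FROM Activity")
        Sheet1.Range("A2").CopyFromRecordset rs   ' lands all rows at once
        rs.Close
        cn.Close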

  • Best wrapper for simultaneous API requests?

    - by bluebit
    I am looking for the easiest, simplest way to access web APIs that return either JSON or XML, with concurrent requests. For example, I would like to call the Twitter search API and return 5 pages of results at the same time (5 requests). The results should ideally be merged and returned in one array of hashes. I have about 15 APIs that I will be using, and I already have code to access them individually (using a simple Net::HTTP request) and parse them, but I need to make these requests concurrent in the easiest way possible. Additionally, any error handling for the JSON/XML parsing is a bonus.
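
    A minimal sketch of the thread-per-request approach in Ruby (assuming Net::HTTP and the json gem; the Twitter search URL reflects the API of the time and is illustrative):

        require 'net/http'
        require 'json'

        uris = (1..5).map { |p| URI("http://search.twitter.com/search.json?q=rspec&page=#{p}") }

        threads = uris.map do |uri|
          Thread.new do
            begin
              JSON.parse(Net::HTTP.get(uri))['results']  # one page per thread
            rescue StandardError
              []  # a failed or unparsable response contributes nothing
            end
          end
        end

        all_results = threads.map(&:value).flatten(1)  # join threads, merge pages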

  • Best way to implement symfony admin components

    - by Chris T
    I am coding a backend in symfony using sfThemePlugin (part of sympal). The dashboard should allow new "admin plugins" to be added fairly easily. What I'd like is a config.yml entry like this:

        sf_easy_admin_plugin:
          enabled_admin_dashboard_plugins: [Twitter, QuickBlogPost, QuickConfig]

    When these are set, the correct components are included into the template. I'd like each one to be in its own plugin (sfTwitterEasyAdminModule, sfQuickBlogPostEasyAdminModule) or have them all bundled in one (sfEasyAdminModules). Is there any way to accomplish this? As far as I know, symfony's include_component() only lets you include components from the current module and not from other plugins. Each "component" or admin plugin should render an icon for the dashboard and an HTML form that stays hidden until the user clicks the icon.
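
    A hedged sketch of how the dashboard template could drive this from app.yml (symfony 1.x helpers; the module-name derivation and the 'dashboard' component name are assumptions). Note that include_component() takes a module name as its first argument, so components from plugin modules should work once those modules are enabled in settings.yml:

        <?php // dashboard template: render one component per configured admin plugin
        $plugins = sfConfig::get('app_sf_easy_admin_plugin_enabled_admin_dashboard_plugins', array());
        foreach ($plugins as $plugin): ?>
          <?php include_component('sf'.$plugin.'EasyAdminModule', 'dashboard') ?>
        <?php endforeach ?>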

  • Best place to store large amounts of session data

    - by audiopleb
    I'm building an application that needs to store and re-use large amounts of data per session. For example, the user selects a large list of list items (say 2000 or significantly more), each of which has a numeric value as its key; they save that selection, go off to another page, do something else, then come back to the original page and need their selections loaded back into it. What is the quickest and most efficient way of storing and reusing that data? A text file named after the session id? A temp DB table? The session data itself (DB-backed sessions, so size isn't a limit), using a serialised string, possibly with gzcompress or gzencode? Any advice or insight would be great! Thank you!!!!
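
    Since gzcompress/gzencode are PHP functions, here is a minimal sketch of the serialize-and-compress variant (key and variable names illustrative):

        <?php
        session_start();

        // store: a few thousand numeric keys compress very well
        $_SESSION['selection_gz'] = gzcompress(serialize($selectedIds), 6);

        // restore on return to the page
        $selectedIds = unserialize(gzuncompress($_SESSION['selection_gz']));

    If the session handler's storage column isn't binary-safe, base64_encode() the compressed blob first.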

  • Best Format for a Software Engineer's Resume

    - by Adam Haile
    I am looking for good, objective ideas and examples of a resume for a software engineer. By all means, post a link to your own resume if you are comfortable doing so. Mostly I am looking at how it should be formatted and what kind of information should be included (and in what order).

  • Spring+JSP url building best practices

    - by dotsid
    I wonder if there are any good practices for addressing Spring controllers from JSP. Suppose I have this controller:

        @Controller
        class FooController {
            // Don't bother about the semantics of this mapping right now
            @RequestMapping("/search/{applicationId}")
            public String handleSearch(@PathVariable String applicationId) {
                [...]
            }
        }

    Of course in JSP I can write:

        <c:url value="/search/${application.id}" />

    But then the URL is very hard to change. If you are familiar with Rails/Grails, you know how this problem is solved there:

        redirect_to(:controller => 'foo', :action => 'search')

    But in Spring there are so many UrlMappers, and each UrlMapper has its own semantics and binding scheme. A Rails-like scheme simply doesn't work (unless you implement it yourself). So my question is: are there any more convenient ways to address a controller from JSP in Spring?
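
    One hedged option, assuming Spring 3.0's spring:url JSP tag, which expands URI template variables from nested params, so at least the URL-building logic lives in one tag rather than in string concatenation:

        <%@ taglib prefix="spring" uri="http://www.springframework.org/tags" %>

        <spring:url value="/search/{applicationId}" var="searchUrl">
            <spring:param name="applicationId" value="${application.id}" />
        </spring:url>
        <a href="${searchUrl}">Search</a>

    The path itself is still duplicated between the annotation and the JSP; extracting it into a shared String constant referenced by @RequestMapping is one way to keep the two in sync.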

  • Best way to simulate a domain?

    - by John Isaacks
    I am going to build a website on a test server that will behave differently depending on which domain is used to access it (The real website will have multiple domains pointing to it). But how will I be able to simulate the different domains on the test server?
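
    The usual trick is hosts-file entries that point each domain at the test server, plus matching virtual hosts (or IIS host headers) in the web server config; the address and domain names below are illustrative:

        # /etc/hosts on the test clients
        # (C:\Windows\System32\drivers\etc\hosts on Windows)
        192.168.1.50   www.first-domain.test
        192.168.1.50   www.second-domain.test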

  • CSS - Best way to do a border like this (with link to a website as example)

    - by markzzz
    Hi to everybody. I need to do a border for my website that looks like this one. The only way I know is to split the page into 9 divs, like this:

        1 2 3
        4 5 6
        7 8 9

    and create 8 images, respectively: top-left (on 1), top-center (on 2), top-right (on 3), left (on 4), right (on 6), bottom-left (on 7), bottom-center (on 8), bottom-right (on 9). Div 5 acts as the main content area. But the whole strategy doesn't feel well-formed. Any tips? Thanks
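
    A hedged sketch of the nine-slice CSS this implies (class names, sizes and image files are all illustrative); the corners are fixed-size images and the edges repeat:

        /* nine-slice frame: 20px corners, repeating edges, div 5 holds content */
        .frame .tl { float: left;  width: 20px; height: 20px; background: url(tl.png); }
        .frame .tr { float: right; width: 20px; height: 20px; background: url(tr.png); }
        .frame .t  { margin: 0 20px; height: 20px; background: url(t.png) repeat-x; }
        .frame .l  { float: left;  width: 20px; background: url(l.png) repeat-y; }
        .frame .r  { float: right; width: 20px; background: url(r.png) repeat-y; }
        .frame .c  { margin: 0 20px; /* the main area (div 5) */ }
        /* bottom row mirrors the top row with bl.png, b.png, br.png */

    On newer browsers the CSS3 border-image property collapses all of this into a single rule, at the cost of browser support.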

  • Best way to model Customer <--> Address

    - by Jen
    Every Customer has a physical address and an optional mailing address. What is your preferred way to model this?

        Option 1. Customer has a foreign key to Address:
            Customer (id, phys_address_id, mail_address_id)
            Address (id, street, city, etc.)

        Option 2. Customer has a one-to-many relationship to Address, which contains a field to describe the address type:
            Customer (id)
            Address (id, customer_id, address_type, street, city, etc.)

        Option 3. Address information is de-normalized and stored in Customer:
            Customer (id, phys_street, phys_city, etc., mail_street, mail_city, etc.)

    One of my overriding goals is to simplify the object-relational mappings, so I'm leaning towards the first approach. What are your thoughts?
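
    For concreteness, a minimal DDL sketch of option 1 (types and names illustrative); the nullable mail_address_id carries the "optional" part:

        CREATE TABLE address (
            id     INT PRIMARY KEY,
            street VARCHAR(100) NOT NULL,
            city   VARCHAR(100) NOT NULL
            -- ...remaining address fields
        );

        CREATE TABLE customer (
            id              INT PRIMARY KEY,
            phys_address_id INT NOT NULL REFERENCES address (id),
            mail_address_id INT NULL REFERENCES address (id)  -- optional mailing address
        );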

  • Best way to reduce consecutive NAs to single NA

    - by digEmAll
    I need to reduce consecutive NAs in a vector to a single NA, without touching the other values. So, for example, given a vector like this:

        NA NA  8  7 NA NA NA NA NA  3  3 NA -1  4

    what I need to get is the following result:

        NA  8  7 NA  3  3 NA -1  4

    Currently I'm using the following function:

        reduceConsecutiveNA2One <- function(vect) {
          enc <- rle(is.na(vect))
          # helper func: expand each run, truncating NA runs to length one
          tmpFun <- function(i) {
            if (enc$values[i]) {
              data.frame(L = c(enc$lengths[i] - 1, 1), V = c(TRUE, FALSE))
            } else {
              data.frame(L = enc$lengths[i], V = enc$values[i])
            }
          }
          Df <- do.call(rbind.data.frame, lapply(1:length(enc$lengths), FUN = tmpFun))
          return(vect[rep.int(!Df$V, Df$L)])
        }

    It seems to work fine, but there's probably a simpler/faster way to accomplish this task. Any suggestions? Thanks in advance.
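
    A shorter sketch of the same reduction: drop every element that is NA and whose predecessor is also NA, so only the first NA of each run survives:

        reduce_na <- function(vect) {
          na <- is.na(vect)
          vect[!(na & c(FALSE, head(na, -1)))]  # keep only the first NA of a run
        }

        reduce_na(c(NA, NA, 8, 7, NA, NA, NA, NA, NA, 3, 3, NA, -1, 4))
        # [1] NA  8  7 NA  3  3 NA -1  4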

  • Best printing tools and components for Windows Forms applications

    - by mkus
    Hi, I have started researching printing components for our Windows Forms-based project and I want to benefit from your experience. There are some popular reporting tools in the .NET world, such as Crystal Reports, ActiveReports and XtraReports. Could you describe the advantages and disadvantages of these tools, or of other tools you would suggest?
