Search Results

Search found 43093 results on 1724 pages for 'oracle best practice'.

Page 546/1724

  • Best means to store data locally when offline

    - by mickartz
    I am in the midst of writing a small program (more to experiment with VS 2010 than anything else). Despite being an experiment, it has some practical use for our local athletics club. My plan is to: access the online database and download the current members, storing them locally on a laptop (this is an MS SQL table used to power the club's website); take the laptop to the event (yes, there ARE places that don't have internet coverage), add members to that day's race (also a row in a SQL table, though no changes would be made to it) and record the results (new rows in a third table); then, once home, showered and within internet access again, upload/edit the tables to reflect the race results and member changes. I was thinking I'd do something like write XML files locally with the data, including a field to indicate changes. If anyone can point me in a direction I would appreciate it... and if anyone could tell me whether this approach has a name, I'd appreciate that too.
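
    What is being described is essentially a disconnected (offline-sync) workflow: keep a local copy of the data, flag whatever changes while offline, and push the flagged rows back once connectivity returns. Purely as an illustration of that flag-and-sync idea (in Python with the standard-library sqlite3 module, not the VS 2010 / MS SQL stack the post is actually about; table and column names are hypothetical):

        import sqlite3

        # Local copy of the members table, plus a 'dirty' flag marking rows
        # that were added or edited while offline and still need uploading.
        conn = sqlite3.connect("local_cache.db")
        conn.execute("""
            CREATE TABLE IF NOT EXISTS members (
                id    INTEGER PRIMARY KEY,
                name  TEXT,
                dirty INTEGER DEFAULT 0   -- 0 = unchanged, 1 = needs upload
            )
        """)

        def add_member_offline(member_id, name):
            # Anything created at the event is flagged for later upload.
            conn.execute(
                "INSERT INTO members (id, name, dirty) VALUES (?, ?, 1)",
                (member_id, name))
            conn.commit()

        def pending_changes():
            # Rows to push back to the central database once online again.
            return conn.execute(
                "SELECT id, name FROM members WHERE dirty = 1").fetchall()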


  • Excel - Best Way to Connect With Access Data

    - by gamerzfuse
    Hello there, here is the situation we have: a) I have an Access database/application that records a significant amount of data - significant fields would be hours, # of sales, # of unreturned calls, etc.; b) I have an Excel document that connects to the Access database and pulls data in to visualize it. As it stands now, the Excel file has a Refresh button that loads new data into a large PivotTable. The main 'visual form' then uses VLOOKUP to pull the results it needs, based on the related hours. This operation is slow (~10 seconds) and seems redundant and inefficient. Is there a better way to do this? I am willing to go just about any route - I just need directions. Thanks in advance! Update: I have confirmed (thanks to helpful comments/responses) that the problem is the data loading itself; removing all the VLOOKUPs only shaved a second or two off the load time. So the question stands: how can I get the data quickly and reliably without so much loading time (around 3000 records are loaded into the PivotTables)?


  • What is the best way to optimize my JSON on an ASP.NET MVC site?

    - by ooo
    I am currently using jqGrid on an ASP.NET MVC site. We have a pretty slow network (it's an internal application), and the grid seems to take a long time to load - the issue is partly the network and partly parsing and rendering. I am trying to work out how to minimize what I send over to the client to make it as fast as possible. Here is a simplified view of my controller action that loads data into the grid:

        [AcceptVerbs(HttpVerbs.Get)]
        public ActionResult GridData1(GridData args)
        {
            var paginatedData = applications.GridPaginate(args.page ?? 1, args.rows ?? 10, i => new
            {
                i.Id,
                Name = "<div class='showDescription' id='" + i.id + "'>" + i.Name + "</div>",
                MyValue = GetImageUrl(_map, i.value, "star"),
                ExternalId = string.Format("<a href=\"{0}\" target=\"_blank\">{1}</a>",
                                           Url.Action("Link", "Order", new { id = i.id }), i.Id),
                i.Target,
                i.Owner,
                EndDate = i.EndDate,
                Updated = "<div class='showView' aitId='" + i.AitId + "'>" + GetImage(i.EndDateColumn, "star") + "</div>",
            });
            return Json(paginatedData);
        }

    So I am building up JSON data (about 200 records of the above) and sending it back to the GUI to put into the jqGrid. The one thing I can think of is the repeated data: in some of the JSON fields I am wrapping the raw data in HTML, and it is the same HTML on every record. It seems it would be more efficient if I could just send the data and "append" the HTML around it on the client side. Is this possible? Then I would only be sending the actual data over the wire, and the client side would add the rest of the HTML tags (the divs, etc.). Also, if there are any other suggestions on how I can minimize the size of my messages, that would be great. I guess at some point these solutions will increase the client-side load, but it may be worth it to cut down on network traffic.


  • Best indexing strategy for several varchar columns in Postgres

    - by Corey
    I have a table with 10 columns that need to be searchable (the table itself has about 20 columns). The user will enter query criteria for at least one of the columns, but possibly all ten; all non-empty criteria are then combined into an AND condition. Suppose the user provided non-empty criteria for column1, column4 and column8; the query would be:

        select *
        from the_table
        where column1 like '%column1_query%'
          and column4 like '%column4_query%'
          and column8 like '%column8_query%'

    So my question is: am I better off creating one index with 10 columns, or 10 indexes with one column each? Or do I need to find out which sets of columns are frequently queried together and create indexes for those (an index on columns 1, 4 and 8 in the case above)? If my understanding is correct, a single 10-column index would only work effectively if all 10 columns are in the condition. I'm open to any suggestions here. Additionally, the row count of the table is only expected to be around 20-30K rows, but I want to make sure any and all searches on the table are fast. Thanks!
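
    One point worth keeping in mind for this kind of search: B-tree indexes (whether one composite index or ten single-column ones) generally cannot be used for LIKE patterns with a leading wildcard such as '%...%'. In Postgres the usual tool for that is a trigram (pg_trgm) GIN index on each searched column, which the planner can combine with a bitmap AND when several criteria are supplied. A small illustrative sketch of setting that up from Python with the psycopg2 driver (the connection string and column list are placeholders):

        import psycopg2

        # Placeholder DSN; adjust for the real database.
        conn = psycopg2.connect("dbname=mydb user=me")
        cur = conn.cursor()

        # pg_trgm provides trigram indexes, which (unlike B-tree indexes)
        # can speed up LIKE '%...%' searches with a leading wildcard.
        cur.execute("CREATE EXTENSION IF NOT EXISTS pg_trgm")

        # One GIN trigram index per searchable column.
        for col in ["column1", "column4", "column8"]:  # ...plus the other searched columns
            cur.execute(
                f"CREATE INDEX IF NOT EXISTS idx_the_table_{col}_trgm "
                f"ON the_table USING gin ({col} gin_trgm_ops)"
            )

        conn.commit()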


  • Best wrapper for simultaneous API requests?

    - by bluebit
    I am looking for the easiest, simplest way to access web APIs that return either JSON or XML, with concurrent requests. For example, I would like to call the Twitter search API and fetch 5 pages of results at the same time (5 requests). The results should ideally be integrated and returned in one array of hashes. I have about 15 APIs that I will be using, and I already have code to access them individually (using a simple NET HTTP request) and parse them, but I need to make these requests concurrent in the easiest way possible. Additionally, any error handling for JSON/XML parsing is a bonus.
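
    Whatever the language, the pattern being asked for is a fan-out of concurrent requests whose parsed results are merged back into a single collection. As an illustration of that pattern only (Python standard library, hypothetical URLs rather than a real API):

        import json
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        # Hypothetical endpoints standing in for 5 pages of an API search.
        urls = [f"https://api.example.com/search?page={n}" for n in range(1, 6)]

        def fetch(url):
            # One request; errors are caught so a single bad page does not
            # sink the whole batch (the "bonus" error handling).
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return json.load(resp)
            except (OSError, json.JSONDecodeError) as exc:
                return {"url": url, "error": str(exc)}

        # Issue all requests concurrently and merge the results into one list.
        with ThreadPoolExecutor(max_workers=5) as pool:
            results = list(pool.map(fetch, urls))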


  • Best way to correct garbled data caused by an incorrect encoding

    - by ercan
    Hi all, I have a set of data that contains garbled text fields because of encoding errors during many imports/exports from one database to another. Most of the errors were caused by converting UTF-8 to ISO-8859-1. Strangely enough, the errors are not consistent: the word 'München' appears as 'MÃ¼nchen' in some places and as 'MÃœnchen' in others. Is there a trick in SQL Server to correct this kind of corruption? The first thing I can think of is to exploit the COLLATE clause, so that 'Ã¼' is interpreted as 'ü', but I don't know exactly how. If it isn't possible to do it at the DB level, do you know of any tool that helps with a bulk correction? (Not a manual find/replace tool, but one that somehow guesses the garbled text and corrects it.)
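
    Outside the database, this particular kind of damage (UTF-8 bytes read as ISO-8859-1/Windows-1252) is mechanically reversible as long as the conversion happened only once and no characters were lost. A small Python illustration of the round trip, independent of the SQL Server question itself:

        # 'MÃ¼nchen' is what 'München' looks like after its UTF-8 bytes were
        # mis-read as Windows-1252 / ISO-8859-1. Re-encoding in that code page
        # and decoding as UTF-8 reverses the damage.
        broken = "MÃ¼nchen"
        fixed = broken.encode("cp1252").decode("utf-8")
        print(fixed)  # -> München

        # Values that went through the bad conversion more than once need the
        # round trip applied repeatedly; stop as soon as it no longer applies.
        def unmojibake(text, max_passes=3):
            for _ in range(max_passes):
                try:
                    text = text.encode("cp1252").decode("utf-8")
                except (UnicodeEncodeError, UnicodeDecodeError):
                    break
            return text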


  • Best way to implement symfony admin components

    - by Chris T
    I am coding a backend in symfony using sfThemePlugin (part of sympal). The dashboard should allow new "admin plugins" to be added fairly easily. What I'd like is a config.yml entry like this:

        sf_easy_admin_plugin:
          enabled_admin_dashboard_plugins: [Twitter, QuickBlogPost, QuickConfig]

    When these are set, the correct components should be included in the template. I'd like each one to live in its own plugin (sfTwitterEasyAdminModule, sfQuickBlogPostEasyAdminModule), or to have them all bundled in one (sfEasyAdminModules). Is there any way to accomplish this? As far as I know, symfony's include_component() only lets you include components from the current module and not from other plugins. Each "component" or admin plugin should render an icon for the dashboard and an HTML form that stays hidden until the user clicks the icon.


  • Best place to store large amounts of session data

    - by audiopleb
    I'm building an application that needs to store and re-use large amounts of data per session. For example, the user selects a large list of list items (say 2000, or significantly more) whose keys are numeric values, saves that selection, goes off to another page, does something else, and then comes back to the original page, where their selections need to be loaded back in. What is the quickest and most efficient way of storing and reusing that data? In a text file saved under the session id? In a temp DB table? In the session data itself (DB-backed sessions, so size isn't a limit), either as a serialised string or compressed with gzcompress or gzencode? Any advice or insight would be great. Thank you!
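
    For a sense of scale (shown in Python rather than the PHP functions named above, purely as an illustration): a few thousand numeric keys serialise to a few tens of kilobytes at most, and compressing them - the gzcompress idea; Python's zlib produces the same zlib format - shrinks that further, which is one reason keeping them in a DB-backed session row is often workable.

        import json
        import random
        import zlib

        # Roughly the shape of the data in question: a few thousand numeric keys.
        selection = random.sample(range(1_000_000), 2000)

        serialized = json.dumps(selection).encode("utf-8")
        compressed = zlib.compress(serialized)  # analogue of PHP's gzcompress()

        # Prints the raw vs. compressed byte counts for this selection.
        print(len(serialized), len(compressed))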


  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes is especially critical and they all use RAID. To date, I've therefore been backing them up by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that:
      - the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade on one of the boxes)
      - long-term, the process isn't sustainable: each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day's
      - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the tarball and kill it)
      - again due to the size issue, I'm leaving out stuff which it would be nice to include - the contents of users' home directories, for example; there's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
      - there must be a better way
    So, my question is: how should I be doing this properly? The requirements are:
      - it needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
      - it should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
      - it should continue to scale with a couple more boxes, slightly more data, etc.
      - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
      - an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad
    My first thought was that this kind of incremental backup was what tar was created for: create a tar file once each month, add to it incrementally, and rsync the results to the remote box. But others probably have better suggestions.


  • Best Format for a Software Engineer's Resume

    - by Adam Haile
    I am looking for good, objective ideas and examples of a resume for a software engineer. By all means, post a link to your own resume if you are comfortable doing so. Mostly I am looking at how it should be formatted and what kind of information should be included (and in what order on the resume).


  • Best way to simulate a domain?

    - by John Isaacks
    I am going to build a website on a test server that will behave differently depending on which domain is used to access it (the real website will have multiple domains pointing to it). But how can I simulate the different domains on the test server?


  • CSS - Best way to do a border like this (with link to a website as example)

    - by markzzz
    Hi to everybody. I need to do a border for my website that looks like this one. The only way I know is to split the page into 9 divs, like this:

        1 2 3
        4 5 6
        7 8 9

    and create 8 images, placed respectively: top-left (on 1), top-center (on 2), top-right (on 3), left (on 4), right (on 6), bottom-left (on 7), bottom-center (on 8), bottom-right (on 9). Div 5 is meant to be the main content area. But the whole strategy doesn't look well formed. Any tips? Thanks


  • How to use External Triggers on Oracle 11g?

    - by RBA
    Hi, I want to fire a trigger whenever an INSERT command is issued. The trigger will access a PL/SQL file which can change at any time, so the question is: if we design the trigger, how can we make sure this dynamic behaviour works? With a stored procedure it is not working. I think it should work with either 1) external procedures or 2) an EXECUTE statement - please correct me if I am wrong. I was working on external procedures, but I cannot find the way to execute the external procedure from here on:

        CREATE OR REPLACE FUNCTION Plstojavafac_func (N NUMBER) RETURN NUMBER AS
        LANGUAGE JAVA
        NAME 'Factorial.J_calcFactorial(int) return int';
        /

        CREATE OR REPLACE TRIGGER student_after_insert
        AFTER INSERT
        ON student
        FOR EACH ROW

    How do I call the procedure from here? And are my interpretations right? Please suggest. Thanks.


  • Spring + JSP URL building best practices

    - by dotsid
    I wonder if there are any good practices for addressing Spring controllers in JSP. Suppose I have this controller:

        @Controller
        class FooController {
            // Don't bother about the semantics of this query right now
            @RequestMapping("/search/{applicationId}")
            public String handleSearch(@PathVariable String applicationId) {
                [...]
            }
        }

    Of course, in JSP I can write:

        <c:url value="/search/${application.id}" />

    but then it's very hard to change the URL. If you are familiar with Rails/Grails, you know how this problem is solved there:

        redirect_to(:controller => 'foo', :action => 'search')

    In Spring, though, there are so many UrlMappers, and each UrlMapper has its own semantics and binding scheme; a Rails-like scheme simply doesn't work (unless you implement it yourself). So my question is: are there any more convenient ways to address a controller from JSP in Spring?


  • Best way to reduce consecutive NAs to single NA

    - by digEmAll
    I need to reduce runs of consecutive NAs in a vector to a single NA, without touching the other values. So, for example, given a vector like this:

        NA NA  8  7 NA NA NA NA NA  3  3 NA -1  4

    what I need to get is the following result:

        NA  8  7 NA  3  3 NA -1  4

    Currently, I'm using the following function:

        reduceConsecutiveNA2One <- function(vect){
          enc <- rle(is.na(vect))
          # helper func
          tmpFun <- function(i){
            if(enc$values[i]){
              data.frame(L=c(enc$lengths[i]-1, 1), V=c(TRUE,FALSE))
            }else{
              data.frame(L=enc$lengths[i], V=enc$values[i])
            }
          }
          Df <- do.call(rbind.data.frame, lapply(1:length(enc$lengths), FUN=tmpFun))
          return(vect[rep.int(!Df$V, Df$L)])
        }

    It seems to work fine, but there is probably a simpler/faster way to accomplish this task. Any suggestions? Thanks in advance.
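
    The operation is essentially a run-length trick: collapse each run of NAs to length one and leave every other run alone (the question's own rle()-based approach; base R's inverse.rle() is the usual companion). Purely to illustrate the run-collapsing logic - in Python rather than R, so not a drop-in answer - a sketch using itertools.groupby, with None standing in for NA:

        from itertools import groupby

        def collapse_na_runs(values):
            """Reduce each run of consecutive None values to a single None."""
            out = []
            for is_na, run in groupby(values, key=lambda x: x is None):
                if is_na:
                    out.append(None)   # a whole run of NAs becomes one NA
                else:
                    out.extend(run)    # non-NA values pass through untouched
            return out

        v = [None, None, 8, 7, None, None, None, None, None, 3, 3, None, -1, 4]
        print(collapse_na_runs(v))  # [None, 8, 7, None, 3, 3, None, -1, 4]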


  • Best way to model Customer <--> Address

    - by Jen
    Every customer has a physical address and an optional mailing address. What is your preferred way to model this?

    Option 1. Customer has foreign keys to Address:

        Customer (id, phys_address_id, mail_address_id)
        Address (id, street, city, etc.)

    Option 2. Customer has a one-to-many relationship to Address, which contains a field describing the address type:

        Customer (id)
        Address (id, customer_id, address_type, street, city, etc.)

    Option 3. Address information is de-normalized and stored in Customer:

        Customer (id, phys_street, phys_city, etc., mail_street, mail_city, etc.)

    One of my overriding goals is to simplify the object-relational mapping, so I'm leaning towards the first approach. What are your thoughts?
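
    On the ORM-simplicity point: with Option 1, each address slot maps to a plain attribute rather than a keyed collection. A small illustrative sketch of that mapping (SQLAlchemy in Python, chosen only as an example mapper - the post doesn't name an ORM):

        from sqlalchemy import Column, ForeignKey, Integer, String
        from sqlalchemy.orm import declarative_base, relationship

        Base = declarative_base()

        class Address(Base):
            __tablename__ = "address"
            id = Column(Integer, primary_key=True)
            street = Column(String)
            city = Column(String)

        class Customer(Base):
            __tablename__ = "customer"
            id = Column(Integer, primary_key=True)
            phys_address_id = Column(Integer, ForeignKey("address.id"), nullable=False)
            mail_address_id = Column(Integer, ForeignKey("address.id"), nullable=True)

            # Two plain many-to-one relationships; foreign_keys disambiguates
            # the two FKs pointing at the same table. customer.mail_address is
            # simply None when there is no mailing address.
            phys_address = relationship(Address, foreign_keys=[phys_address_id])
            mail_address = relationship(Address, foreign_keys=[mail_address_id])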

