Search Results

Search found 28593 results on 1144 pages for 'best practices'.

  • What is the best way of giving feedback to the user?

    - by Nubkadiya
    I'm using speech recognition, triggered by a button in my application. I want to show users that once they click the button, they should start speaking. I was thinking about using a progress bar, but I don't think that's a good idea. Then I thought about putting up a label saying what's going on. Can someone suggest more options, please?

  • Best indexing strategy for several varchar columns in Postgres

    - by Corey
    I have a table with 10 columns that need to be searchable (the table itself has about 20 columns). The user will enter query criteria for at least one of the columns, but possibly all ten. All non-empty criteria are then combined into an AND condition. Suppose the user provided non-empty criteria for column1, column4 and column8; the query would be:

        select * from the_table
        where column1 like '%column1_query%'
          and column4 like '%column4_query%'
          and column8 like '%column8_query%'

    So my question is: am I better off creating one index with 10 columns, or 10 indexes with one column each? Or do I need to find out which sets of columns are frequently queried together and create indexes for those (an index on columns 1, 4 and 8 in the case above)? If my understanding is correct, a single 10-column index would only work effectively if all 10 columns are in the condition. I'm open to any suggestions here. Additionally, the row count is only expected to be around 20-30K rows, but I want to make sure any and all searches on the table are fast. Thanks!
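
    A minimal sketch of one way to index this, assuming the pg_trgm contrib module can be installed (only the table and column names come from the question): an ordinary b-tree index, whether one 10-column index or ten single-column ones, cannot serve a LIKE '%...%' predicate with a leading wildcard at all, whereas one trigram GIN index per searchable column can, and the planner can AND them together per query.

        CREATE EXTENSION IF NOT EXISTS pg_trgm;  -- ships with Postgres contrib

        -- One trigram index per searchable column; for multi-column AND
        -- conditions the planner can combine them with a BitmapAnd.
        CREATE INDEX the_table_column1_trgm ON the_table USING gin (column1 gin_trgm_ops);
        CREATE INDEX the_table_column4_trgm ON the_table USING gin (column4 gin_trgm_ops);
        CREATE INDEX the_table_column8_trgm ON the_table USING gin (column8 gin_trgm_ops);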

  • Best means to store data locally when offline

    - by mickartz
    I am in the midst of writing a small program (more to experiment with VS 2010 than anything else). Despite being an experiment, it has some practical use for our local athletics club. The idea is to:

      - access the DB (currently online) and download the current members to a laptop (this is an MS SQL table, used to power the club's website)
      - take the laptop to the event (yes, there ARE places that don't have internet coverage), add members to that day's race (also a row in a SQL table, though no changes would be made to this one) and record results (new records in a third table)
      - once home, showered and within internet access again, upload/edit the tables as per the race results and member changes

    I was thinking I'd write XML files locally with the data, including a field to indicate changes. If anyone can point me in a direction I would appreciate it... hell, if anyone could tell me whether this approach has a name, I'd appreciate it.
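
    The approach does have a name: an offline cache with change tracking (often called "store and forward" or offline-first sync). A minimal sketch of the shape in Python with SQLite, purely for illustration - the question's stack would be ADO.NET and MS SQL, and every name below is invented:

        import sqlite3

        con = sqlite3.connect("club_offline.db")
        con.execute("""CREATE TABLE IF NOT EXISTS members (
                           id    INTEGER PRIMARY KEY,
                           name  TEXT,
                           dirty INTEGER NOT NULL DEFAULT 0)""")

        # An offline edit at the event: write the row and flag it as changed
        con.execute("INSERT OR REPLACE INTO members (id, name, dirty) VALUES (?, ?, 1)",
                    (42, "J. Smith"))
        con.commit()

        # Back online: push only the flagged rows, then clear the flags
        for member_id, name in con.execute(
                "SELECT id, name FROM members WHERE dirty = 1"):
            print("would upload", member_id, name)  # the real app would hit MS SQL here
        con.execute("UPDATE members SET dirty = 0 WHERE dirty = 1")
        con.commit()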

  • Best way to correct garbled data caused by a wrong encoding

    - by ercan
    Hi all, I have a set of data that contains garbled text fields, caused by encoding errors during many imports/exports from one database to another. Most of the errors came from converting UTF-8 to ISO-8859-1. Strangely enough, the errors are not consistent: the word 'München' appears as 'MÃ¼nchen' in one place and as 'MÃœnchen' in another. Is there a trick in SQL Server to correct this kind of mess? The first thing I can think of is to exploit the COLLATE clause, so that 'Ã¼' is interpreted as 'ü', but I don't know exactly how. If it isn't possible at the DB level, do you know of any tool that helps with a bulk correction? (Not a manual find/replace tool, but one that somehow guesses the garbled text and corrects it.)
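
    Outside the database, the standard trick is to reverse the mis-decode: re-encode the garbled string as Windows-1252 (a superset of ISO-8859-1) and decode the resulting bytes as UTF-8. A minimal sketch in Python, purely for illustration; the ftfy Python library automates exactly this kind of guessing for bulk data:

        # Reverse a UTF-8 -> cp1252 mis-decode (mojibake repair)
        def fix_mojibake(s: str) -> str:
            try:
                return s.encode("cp1252").decode("utf-8")
            except (UnicodeEncodeError, UnicodeDecodeError):
                return s  # already clean, or garbled in some other way

        print(fix_mojibake("MÃ¼nchen"))  # -> München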

  • Best wrapper for simultaneous API requests?

    - by bluebit
    I am looking for the easiest, simplest way to access web APIs that return either JSON or XML, with concurrent requests. For example, I would like to call the Twitter search API and fetch 5 pages of results at the same time (5 requests). The results should ideally be merged and returned in one array of hashes. I have about 15 APIs that I will be using, and I already have code to access them individually (using a simple Net::HTTP request) and parse them, but I need to make these requests concurrent in the easiest way possible. Additionally, any error handling for the JSON/XML parsing is a bonus.
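
    A minimal sketch of the fan-out-and-merge pattern, shown in Python purely for illustration (the URL and page parameter are invented); the same shape applies whatever HTTP wrapper is used:

        import json
        from concurrent.futures import ThreadPoolExecutor
        from urllib.request import urlopen

        def fetch(url):
            try:
                with urlopen(url, timeout=10) as resp:
                    return json.load(resp)          # one dict ("hash") per page
            except Exception as exc:                # network and parse errors alike
                return {"url": url, "error": str(exc)}

        urls = [f"https://search.example.com/q?page={p}" for p in range(1, 6)]
        with ThreadPoolExecutor(max_workers=5) as pool:
            results = list(pool.map(fetch, urls))   # merged: one array of hashes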

  • Best Format for a Software Engineer's Resume

    - by Adam Haile
    I am looking for good, objective ideas and examples of a resume for a Software Engineer. By all means, post a link to your own resume if you are comfortable doing so. Mostly I am looking at how it should be formatted and what kind of information should be included (and in what order on the resume).

  • Excel - Best Way to Connect With Access Data

    - by gamerzfuse
    Hello there, here is the situation we have: (a) an Access database / application that records a significant amount of data - significant fields would be hours, # of sales, # of unreturned calls, etc.; (b) an Excel document that connects to the Access database and pulls data in to visualize it. As it stands now, the Excel file has a Refresh button that loads new data. The data is loaded into a large PivotTable, and the main 'visual form' then uses VLOOKUP against that PivotTable to get the results, based on the related hours. This operation is slow (~10 seconds) and seems redundant and inefficient. Is there a better way to do this? I am willing to go just about any route - just need directions. Thanks in advance! Update: I have confirmed (thanks to helpful comments/responses) that the problem is the data loading itself; removing all the VLOOKUPs only took a second or two off the load time. So the question stands: how can I get the data rapidly and reliably without so much load time (it loads around 3000 records into the PivotTables)?
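
    One direction worth trying, sketched under assumptions (the table and column names below are invented): let Access do the aggregation and hand Excel only the summary rows, rather than importing every detail record into a PivotTable and then looking values up in it.

        -- Illustrative: aggregate in the database, import only the summary
        SELECT hours,
               SUM(sales_count)      AS total_sales,
               SUM(unreturned_calls) AS total_unreturned
        FROM   call_log
        GROUP  BY hours
        ORDER  BY hours;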

  • CSS - Best way to do a border like this (with link to a website as example)

    - by markzzz
    Hi everybody. I need to do a border for my website that looks like this one. The only way I know is to split the page into 9 divs, arranged like this:

        1 2 3
        4 5 6
        7 8 9

    and create 8 images: top-left (on 1), top-centre (on 2), top-right (on 3), left (on 4), right (on 6), bottom-left (on 7), bottom-centre (on 8), bottom-right (on 9). Div 5 serves as the main content area. But the whole strategy seems not well formed. Any tips? Thanks
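
    For what it's worth, this nine-slice scheme is exactly what the CSS3 border-image property automates, if its browser support is acceptable; a minimal sketch with an invented image name:

        /* One image sliced 30px in from each edge: the corner tiles land in
           the corners, the edge tiles repeat along the sides, and "div 5" is
           simply the element's own content area. */
        .bordered {
            border: 30px solid transparent;
            border-image: url("frame.png") 30 repeat;
        }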

  • Best C++ development environment in Linux

    - by Bruce
    I have some experience with Eclipse and Qt Creator and am somewhat disappointed in their debuggers, less so in their editors. On Windows, I like Visual Studio for debugging and SlickEdit for editing (SlickEdit is also available on Linux). Is there an IDE that is somehow better than the two mentioned?

  • Best Practice: Protecting Personally Identifiable Data in an ASP.NET / SQL Server 2008 Environment

    - by William
    Thanks to a SQL injection vulnerability found last week, some of my recommendations are being investigated at work. We recently re-did an application which stores personally identifiable information whose disclosure could lead to identity theft. While we read some of the data on a regular basis, the restricted data we only need a couple of times a year, and then only two employees need it. I've read up on SQL Server 2008's encryption features, but I'm not convinced that's the route I want to go: we'd either be using symmetric keys, or asymmetric keys encrypted by a symmetric key, so it seems a SQL injection attack could still lead to a data leak. I realize permissions should prevent that; permissions should also have prevented the leak in the first place. It seems to me the better method would be to asymmetrically encrypt the data in the web application, store the private key offline, and have a fat client that the employees run the few times a year they need the restricted data, so the data is only ever decrypted on the client. This way, if the server gets compromised, we don't leak old data, although depending on what the attacker does we may leak future data. I think the big disadvantage is that this would require re-writing the web application and creating a new fat application (to pull the restricted data). Due to the recent problem, I can probably get the time allocated, so now would be the proper time to make the recommendation. Do you have a better suggestion? Which method would you recommend? More importantly, why?
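
    In outline, the proposed design looks like the sketch below, using Python's cryptography package purely to illustrate the architecture rather than the production stack (an ASP.NET app would do the same with the framework's RSA APIs):

        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa, padding

        oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)

        # Generated once; the private half never touches the web server
        private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        public_key = private_key.public_key()

        # Web tier: can encrypt new records but can never read them back
        ciphertext = public_key.encrypt(b"123-45-6789", oaep)

        # Offline fat client, a couple of times a year: holds the private key
        assert private_key.decrypt(ciphertext, oaep) == b"123-45-6789"
        # Note: RSA-OAEP caps plaintext at ~190 bytes for a 2048-bit key;
        # larger fields need hybrid encryption (random AES key wrapped by RSA).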

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes is especially critical and they're all using RAID. To date, I've therefore been doing backups by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however:

      - the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes); long-term, the process isn't sustainable, as each box currently produces a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day's
      - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the size of the tarball and kill it)
      - again due to the size issue, I'm leaving stuff out which it would be nice to include - the contents of users' home directories, for example. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
      - there must be a better way

    So, my question is, how should I be doing this properly? The requirements are:

      - needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
      - should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
      - should continue to scale with a couple more boxes, slightly more data, etc.
      - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
      - an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad

    My first thought was that this kind of incremental backup is what tar was created for: create a tar file once each month, add to it incrementally each day, and rsync the results to the remote box. But others probably have better suggestions.
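
    One common shape for this, essentially what tools like rsnapshot automate, is an rsync hard-link snapshot. A minimal sketch with invented hostnames and paths, run by cron on the offsite box: unchanged files cost no extra space or transfer, yet every dated directory is a complete, directly restorable tree.

        #!/bin/sh
        today=$(date +%F)
        rsync -az --delete \
              --link-dest=/srv/backups/box1/latest \
              box1:/etc box1:/home box1:/var/backups/mysql \
              /srv/backups/box1/"$today"/
        # Point "latest" at the snapshot just taken, for the next run
        ln -sfn "$today" /srv/backups/box1/latest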

  • Best way to implement symfony admin components

    - by Chris T
    I am coding a backend in symfony using the sfThemePlugin (part of sympal). The dashboard should allow new "admin plugins" to be added fairly easily. What I'd like is a config.yml entry like this:

        sf_easy_admin_plugin:
          enabled_admin_dashboard_plugins: [Twitter, QuickBlogPost, QuickConfig]

    and when these are set, the correct components get included into the template. I'd like each one to live in its own plugin (sfTwitterEasyAdminModule, sfQuickBlogPostEasyAdminModule), or to have them all bundled in one (sfEasyAdminModules). Is there any way to accomplish this? As far as I know, symfony's include_component() only lets you include components from the current module, not from other plugins. Each "component" (admin plugin) should render an icon for the dashboard and an HTML form that stays hidden until the user clicks the icon.

  • Best place to store large amounts of session data

    - by audiopleb
    I'm building an application that needs to store and re-use large amounts of data per session. For example, the user selects a large list of list items (say 2,000 or significantly more), each keyed by a numeric value, saves that selection, goes off to another page, does something else, and then comes back to the original page, which needs to load their selections again. What is the quickest and most efficient way of storing and reusing that data? A text file saved under the session id? A temp DB table? The session data itself (DB-backed sessions, so size isn't a limit), as a serialised string, perhaps run through gzcompress or gzencode? Any advice or insight would be great. Thank you!
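
    If the temp-DB-table option wins, a minimal sketch of its shape is below (all names invented): one row per selected item, keyed by the session id, which keeps the selection indexable, trivially reloadable, and easy to purge when sessions expire.

        -- Illustrative schema for the "temp db table" option
        CREATE TABLE session_selections (
            session_id CHAR(32) NOT NULL,   -- e.g. the PHP session id
            item_id    INT      NOT NULL,   -- the item's numeric key
            PRIMARY KEY (session_id, item_id)
        );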

  • Best way to simulate a domain?

    - by John Isaacks
    I am going to build a website on a test server that will behave differently depending on which domain is used to access it (the real website will have multiple domains pointing to it). But how can I simulate the different domains on the test server?
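
    The usual trick, assuming you control the machines doing the testing: point the real hostnames at the test server in each client's hosts file (/etc/hosts on Linux, %SystemRoot%\System32\drivers\etc\hosts on Windows) and give the web server one virtual host per name. Illustrative entries (the domains are made up; 192.0.2.x is a reserved documentation range):

        # hosts file on the test client: both domains resolve to the test box
        192.0.2.10   www.first-domain.example
        192.0.2.10   www.second-domain.example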

  • Maven best practice for generating artifacts for multiple environments [prod, test, dev] with CI/Hudson

    - by jaguard
    I have a project that needs to be deployed into multiple environments (prod, test, dev). The differences mainly consist of configuration properties/files. My idea was to use profiles and overlays to copy/configure the specialized output, but I'm stuck on whether I should generate multiple artifacts with specialized classifiers (e.g. "my-app-1.0-prod.zip/jar", "my-app-1.0-dev.zip/jar") or create multiple projects, one for every environment. Should I use the maven-assembly-plugin to generate multiple artifacts per environment? In any case, I'll need to generate all of them at once, so it seems that profiles don't fit... still puzzled :( Any hints/examples/links would be more than welcome. As a side issue, I'm also wondering how to achieve this with CI in Hudson/Bamboo: generating these artifacts for all the environments and deploying them to their proper servers (e.g. using the SCP Hudson plugin)?
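
    One arrangement that satisfies the "all at once" requirement, sketched under assumptions (the descriptor paths are invented): skip profiles and drive the maven-assembly-plugin with one assembly descriptor per environment. Each descriptor's <id> (prod, test, dev) is appended to the artifact name as a classifier, giving exactly the my-app-1.0-prod.zip style of output from a single build, which also suits a single Hudson job.

        <!-- Illustrative pom.xml fragment -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-assembly-plugin</artifactId>
          <executions>
            <execution>
              <phase>package</phase>
              <goals><goal>single</goal></goals>
              <configuration>
                <descriptors>
                  <!-- each descriptor pulls in that environment's config files -->
                  <descriptor>src/assembly/prod.xml</descriptor>
                  <descriptor>src/assembly/test.xml</descriptor>
                  <descriptor>src/assembly/dev.xml</descriptor>
                </descriptors>
              </configuration>
            </execution>
          </executions>
        </plugin>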

  • Best way to search for a saturation value in a sorted list

    - by AB Kolan
    A question from Math Battle. This particular question was also asked in one of my job interviews. "A monkey has two coconuts. It is fooling around by throwing coconuts down from the balconies of an M-storey building. The monkey wants to know the lowest floor at which a coconut breaks. What is the minimal number of attempts needed to establish that fact?" Condition: if a coconut breaks, you cannot reuse it; you are left with only the other coconut. Possible approaches/strategies I can think of:

      - binary break-up: once you find a floor at which the coconut breaks, count upward with the second coconut from the lower index of the last binary step
      - windows/slices of smaller sets of floors, with a binary break-up within each window (though on the downside this would require a slicing algorithm of its own)

    Wondering if there are any other ways to do this.
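
    For reference, the classic analysis of this puzzle (an editorial sketch, not from the question): with k attempts and two coconuts you can cover at most k + (k-1) + ... + 1 floors, by dropping the first coconut from floor k, then floor k + (k-1), and so on, and scanning the last safe gap linearly with the second coconut. The minimal number of attempts is therefore the smallest k whose triangular number reaches M:

        k_{\min} = \min\left\{ k \;:\; \frac{k(k+1)}{2} \ge M \right\},
        \qquad
        M = 100 \;\Rightarrow\; k_{\min} = 14
        \quad\left(\text{since } \tfrac{14 \cdot 15}{2} = 105 \ge 100 > 91 = \tfrac{13 \cdot 14}{2}\right).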

  • What are CDN Best Practices?

    - by Wild Thing
    Hi, I have recently started using the Rackspace Cloud Files CDN (Limelight), about which I have some questions:

      - I am using jQuery, jQuery UI and jQuery Tools in addition to custom JS code, and my site is written in ASP.NET, which also generates some JS of its own. So far I have combined all of the JS (including the jQuery code, but excluding the ASP.NET-generated JS) into one file, hosted on the Rackspace CDN. Would it make more sense to load the jQuery and jQuery UI files from the Google-hosted CDN instead (which I suspect would serve these files very well, since they will already be in many users' caches)? It would mean one extra HTTP request, so I'm not sure it would help. (See the sketch after this list.)
      - Right now I have multiple containers for my assets: in Rackspace I have three - JS, CSS and Images - and the URL subdomain for each is different. Will that carry a performance penalty? Should I use just one container (and thus one domain for the CDN)?
      - Is there a way to have the ASP.NET-generated JS loaded from a Microsoft CDN? Would that have the same performance trade-off as in the first question?

    Thanks in advance, WT
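
    For the first question, a common compromise is to load the library from Google's CDN with a local fallback, so the extra request only costs anything when the cache misses, and an unreachable CDN cannot break the site. A sketch; the version number and local path are illustrative:

        <!-- jQuery from Google's CDN, falling back to the locally hosted copy -->
        <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
        <script>
          window.jQuery || document.write(
              '<script src="/js/jquery.min.js"><\/script>');
        </script>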
