Search Results

Search found 33194 results on 1328 pages for 'development approach'.


  • How many of you *really* surf around without JavaScript enabled? [closed]

    - by Stephen
    I've decided to rephrase the question. After some deliberation on Meta, I've realized that my question needs to be a bit more focused.

    The question: Should we (web developers) continue to spend effort progressively enhancing our web applications with JavaScript, ensuring that features gracefully degrade and thereby remain accessible? Or should we spend that time on new features or other areas of development?

    The subtext of that question would be: How many of our customers/clients/users use our websites or applications with JavaScript disabled? Do you have any projects with requirements that specifically demand JavaScript functionality (almost all of mine do), and do those requirements also demand graceful degradation?

    For the sake of asking this question, I pulled up programmers.stackexchange.com without JavaScript enabled and was greeted with this message: "Programmers - Stack Exchange works best with JavaScript enabled". It was difficult to log in, although the site generally seemed to work okay. (I wasn't able to vote up any questions.) I think this is a satisfactory approach to development; imagine the effort involved in making all of the site's features work with plain old HTML and server-side logic. On the other hand, I wonder how many users have been alienated by this approach.

    We've all been trained (at least the good developers among us) to use progressive enhancement and to ensure our web applications' dynamic features degrade gracefully. Is this progressive enhancement just pissing into the wind, or do some of our customers actually use certain web services without JavaScript enabled? I mean really, not figuratively or presumptuously.

    Read the article

  • How important is graceful degradation of JavaScript? [closed]

    - by Stephen
    Should web developers continue to spend effort progressively enhancing our web applications with JavaScript, ensuring that features gracefully degrade and thereby remain accessible? Or should we spend that time on new features or other areas of development?

    The subtext of that question would be: How many of our customers/clients/users use our websites or applications with JavaScript disabled? Do you have any projects with requirements that specifically demand JavaScript functionality (almost all of mine do), and do those requirements also demand graceful degradation?

    For the sake of asking this question, I pulled up programmers.stackexchange.com without JavaScript enabled and was greeted with this message: "Programmers - Stack Exchange works best with JavaScript enabled". It was difficult to log in, although the site generally seemed to work okay. (I wasn't able to vote up any questions.) I think this is a satisfactory approach to development; imagine the effort involved in making all of the site's features work with plain old HTML and server-side logic. On the other hand, I wonder how many users have been alienated by this approach.

    We've all been trained (at least the good developers among us) to use progressive enhancement and to ensure our web applications' dynamic features degrade gracefully. Is this progressive enhancement just pissing into the wind, or do some of our customers actually use certain web services without JavaScript enabled?

    Read the article

  • How to handle editing a large file for a non-technical user

    - by Luke
    I have a client who is given a tab-delimited .txt file containing hundreds of thousands of rows. I have a user story as follows: as a user, I want to take the text file and add a new value at the end of each line containing the concatenated value of two of the columns. For example, if the file reads

        text_one text_two

    I need to output the following (preferably to a .txt file):

        text_one text_two text_onetext_two

    My first approach was to ask the vendor supplying the file to do the concatenation before providing the file; the easiest way to solve a problem is to eliminate it, right? However, they are very uncooperative and have point-blank refused.

    I've looked at building a simple JavaScript application that does this client-side, so a non-technical user could select the file using a file selector. This approach has a few problems:

    - The file could be over a GB in size and so can't be loaded straight into memory; I've tried, and the browser crashes.
    - There is no means to write a file in JavaScript, so I'd need to output the content to the screen and have the user save it (somehow).

    I was thinking that if I could get around the file-size limitations I could just output the edited content to the page and have the user save the page as a .txt file, but I think there is a better way than using JavaScript that will still accommodate the user's lack of technical know-how. See the sketch below for one streaming approach.

    Please consider this question to be stack agnostic, but bear in mind that a nice little shell script or Python script would be deemed unsuitable for a non-technical user unless there is a way of "packaging" it nicely for them.

    Updates: The file is too large to open in Excel. The process needs to be run weekly, but it doesn't require scheduling or automation... (yet)
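    A task like this can be handled line by line, so the whole file never has to fit in memory. Here is a rough sketch in Python (file names and column indexes are placeholders); something like this could then be wrapped in a small GUI or bundled into a single executable (for example with a packager such as PyInstaller) so the non-technical user only has to double-click it.

        # Stream the tab-delimited file and append the concatenation of two
        # chosen columns to every row, writing the result to a new file.
        import csv

        def append_concatenation(in_path, out_path, col_a=0, col_b=1):
            with open(in_path, newline="", encoding="utf-8") as src, \
                 open(out_path, "w", newline="", encoding="utf-8") as dst:
                reader = csv.reader(src, delimiter="\t")
                writer = csv.writer(dst, delimiter="\t")
                for row in reader:
                    writer.writerow(row + [row[col_a] + row[col_b]])

        if __name__ == "__main__":
            append_concatenation("input.txt", "output.txt")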

    Read the article

  • Experienced programmer, beginner at web design, tools for effective maintainable web design? [closed]

    - by Clinton
    I do quite a bit of programming in my work, which I'm comfortable with, but recently I've been trying to do some web design for non-work-related reasons. I've got a Drupal site up and running and have added some content, but it all looks fairly basic: a header with some content. It doesn't look particularly polished.

    As an example, what I wanted to do was make some "bubbles", each with some text in them. From a programmer's point of view, say bubble(question_text, answer_text) might expand to a box with some border, with "Question: " + question_text then "Answer: " + answer_text. Of course I'd have lots of these bubbles, but I'd like to change their look and feel in one place, so plain hand-written HTML would be a maintenance nightmare. I also want to lay them out on the screen in some fashion. I was thinking of a mixture of JavaScript and CSS, or possibly PHP, which Drupal uses.

    On the other hand, I fear I might be taking a 1990s approach to this, and that there are actually tools available now that make this process a lot easier. I'm just wondering what the best approach to this sort of task is. Should I be using offline web design software and copying the code to Drupal, and if so, any recommendations?

    I'm sorry if my question is a bit vague, because I'm not really sure what question I should be asking. I'd appreciate it if you answer and comment, and I'll try my best to be more specific as I understand more.
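    For what it's worth, the bubble(question_text, answer_text) idea maps directly onto "one template function plus one CSS class", whichever tool ends up generating the markup. Here is a rough sketch in Python (Drupal itself would do this with a PHP theme function or template; all names and styles below are invented) just to show that the look and feel lives in a single place:

        # One function produces the markup and one CSS class controls the look,
        # so restyling every bubble means editing a single place.
        from html import escape

        BUBBLE_CSS = """
        .bubble { border: 1px solid #ccc; border-radius: 8px; padding: 1em; margin: 0.5em 0; }
        .bubble .question { font-weight: bold; }
        """

        def bubble(question_text, answer_text):
            return (
                '<div class="bubble">'
                f'<p class="question">Question: {escape(question_text)}</p>'
                f'<p class="answer">Answer: {escape(answer_text)}</p>'
                '</div>'
            )

        if __name__ == "__main__":
            print(f"<style>{BUBBLE_CSS}</style>")
            print(bubble("What is Drupal?", "A PHP-based content management system."))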

    Read the article

  • Should Developers Perform All Tasks or Should They Specialize?

    - by Bob Horn
    Disclaimer: The intent of this question isn't to discern what is better for the individual developer, but for the system as a whole.

    I've worked in environments where small teams managed certain areas. For example, there would be a small team for every one of these functions: UI, framework code, business/application logic, database. I've also worked on teams where the developers were responsible for all of these areas and more (QA, analyst, etc.). My current environment promotes agile development (specifically Scrum), and everyone has their hands in every area mentioned above. While there are pros and cons to each approach, I'd be curious to know if there are more pros and cons than I list below, and also what the general feeling is about which approach is better.

    Devs Do It All

    Pros:
    1. Developers may be more well-rounded
    2. Developers know more of the system

    Cons:
    1. Everyone has their hands in all areas, increasing the probability of creating less-than-optimal results in any one area
    2. It can take longer to do something with which you are unfamiliar (jack of all trades, master of none)

    Devs Specialize

    Pros:
    1. Developers can create policies and procedures for their area of expertise and more easily enforce them
    2. Developers have more of a chance to become deeply knowledgeable about their specific area and make it the best it can be
    3. Other developers don't cross boundaries and degrade another area

    Cons:
    1. As one colleague put it: "Why would you want to pigeon-hole yourself like that?" (Meaning some developers won't get a chance to work in certain areas.)

    It's easy to say how wonderful agile is, and that we should do it all, but I'm somewhat of a fan of having areas of expertise. Without that expertise, I've seen code degrade, database schemas become difficult to manage, hacked-together UI code, etc. Let's face it, some people make careers out of doing just UI work, or just database work. It's not that easy to just fill in and do as good a job as an expert in that area.

    Read the article

  • Being stupid to get better productivity?

    - by loki2302
    I've spent a lot of time reading different books about "good design", "design patterns", etc. I'm a big fan of the SOLID approach, and every time I need to write a simple piece of code, I think about the future. So, if implementing a new feature or a bug fix requires just adding three lines of code like this:

        if(xxx) {
            doSomething();
        }

    it doesn't mean I'll do it this way. If I feel like this piece of code is likely to become larger in the near future, I'll think about adding abstractions, moving this functionality somewhere else, and so on. The goal I'm pursuing is keeping average complexity the same as it was before my changes.

    I believe that, from the code standpoint, it's quite a good idea: my code never gets overly long, and it's quite easy to understand the meaning of the different entities, like classes, methods, and the relations between classes and objects.

    The problem is, it takes too much time, and I often feel like it would be better if I just implemented the feature "as is". It comes down to "three lines of code" vs. "a new interface plus two classes to implement that interface" (contrasted in the sketch below). From a product standpoint (when we're talking about the result), the things I do are quite senseless. I know that if we're going to work on the next version, having good code is really great. But on the other hand, the time you've spent making your code "good" could have been spent implementing a couple of useful features. I often feel very unsatisfied with my results: good code that can only do A is worse than bad code that can do A, B, C, and D.

    Are there any books, articles, blogs, or ideas of your own that may help with developing this "being stupid" approach?
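    To make the trade-off concrete, here is a tiny hypothetical illustration in Python (an invented Order/discount example, not anything from the question) of the same change written both ways; the whole argument is about whether the second form pays for itself before the requirement actually grows.

        from abc import ABC, abstractmethod
        from dataclasses import dataclass

        @dataclass
        class Order:
            total: float

        # Option 1: the "three lines of code" fix, done in place.
        def apply_discount(order: Order) -> Order:
            if order.total > 100:       # the new requirement, handled directly
                order.total *= 0.9
            return order

        # Option 2: the "new interface + two classes" version of the same change.
        class DiscountPolicy(ABC):
            @abstractmethod
            def apply(self, order: Order) -> Order: ...

        class NoDiscount(DiscountPolicy):
            def apply(self, order: Order) -> Order:
                return order

        class BulkDiscount(DiscountPolicy):
            def apply(self, order: Order) -> Order:
                if order.total > 100:
                    order.total *= 0.9
                return order

        print(apply_discount(Order(total=150)).total)        # 135.0
        print(BulkDiscount().apply(Order(total=150)).total)  # 135.0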

    Read the article

  • How to plan/manage multi-platform (mobile) products?

    - by PhD
    Say I have to develop an app that runs on iOS, Android, and Windows 8 Mobile. All three platforms use different programming languages; the only 'reuse' that I can see is the boxes-and-lines drawings (UML :) and nothing else. So how do companies/programmers manage variation of the same product across different platforms when the implementation languages differ? It's 'easier' in the desktop world, IMO, given the plethora of languages and cross-platform libraries that make your life easier. Not so in the mobile world. What's more, product-line management principles don't seem to be all that applicable: asking what is common and what is variant doesn't really help, because the application is the same (conceptually) and the implementation is what varies.

    Some difficulties that come to mind:

    - Bug fixing: Applications may be designed in a similar manner, but bug identification and fixing will be radically different. A bug on iOS may or may not exist on Android, and a bug-fix approach on one platform may not be the same on another (unless it's a semantic bug like a!=b instead of a==b, which would require essentially the same 'approach' to fixing).
    - Enhancements: Making a change on one platform will be radically different than on another.
    - Code-design divergence: The way the code is written/organized, the class structures, etc., could be very different given the different implementation environments, further reducing reuse of the (above) UML models.

    There are of course many others: just keeping development in sync and making sure all applications are up to the same version with the same set of features, etc. It seems the effort is 3x that of a single application. So how exactly does one manage this nightmarish situation?

    Some thoughts:

    - Split the application into client/server to limit the per-platform effort to the client side only (not always doable).
    - Use frameworks like Unity-3D that take care of the cross-platform problem (mostly applicable to games, and probably not to other applications).

    Any other ways of managing a platform line? What are some proven approaches to managing/taming the effects?

    Read the article

  • On Developing Web Services with Global State

    - by user74418
    I'm new to web programming; I'm more experienced and comfortable with client-side code. Recently, I've been dabbling in web programming through Python's Google App Engine. I ran into some difficulty while trying to write some simple apps for the purposes of learning, mainly involving how to maintain some kind of consistent, universally accessible state for the application.

    I tried to write a simple queueing management system, the kind you would expect to be used in a small clinic or at a cafeteria. Typically, this is done with hardware: you take a number from a ticketing machine, and when your number is displayed or called you approach the counter for service. Alternatively, you could be given a small pager, which will beep or vibrate when it is your turn to receive service. The former is somewhat better in that you have an idea of how many people are still ahead of you in the queue.

    In this situation, the global state is the last number in the queue, which needs to be updated whenever a request is made to the server. I'm not sure how best to store and maintain this value in a GAE context. The solution I thought of was to keep the value in the Datastore, query it during a ticket request, update the value, and then re-store it with put. My problem is that I haven't figured out how to lock the resource so that other requests do not check the value while it is in the middle of being updated. I am concerned that I may end up with ticket requests that have the same queue number. Also, the whole solution feels awkward to me. I was wondering if there is a more natural way to accomplish this without having to go through the Datastore.

    Can anyone with more experience in this domain provide some advice on how to approach the design of the above application?
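    For what it's worth, datastore transactions are the usual answer to the locking concern here: the read-increment-write happens atomically, and conflicting transactions are retried. A minimal sketch, assuming the App Engine Python runtime's ndb client library (the model and key names below are invented):

        from google.appengine.ext import ndb

        class TicketCounter(ndb.Model):
            """Single entity holding the last queue number handed out."""
            last_number = ndb.IntegerProperty(default=0)

        COUNTER_KEY = ndb.Key(TicketCounter, 'queue')

        @ndb.transactional(retries=3)
        def take_ticket():
            # Runs in a datastore transaction, so two concurrent requests cannot
            # be handed the same number; on conflict one of them is rolled back
            # and retried.
            counter = COUNTER_KEY.get() or TicketCounter(key=COUNTER_KEY)
            counter.last_number += 1
            counter.put()
            return counter.last_number

    The usual caveat applies: a single counter entity limits sustained write throughput, which is why sharded counters are the documented pattern when a counter gets busy.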

    Read the article

  • Autoloading Development or Production configs (best practices)

    - by Xeoncross
    When programming sites you usually have one set of config files for the development environment and another set for the production server (or one file with both settings). I am assuming all projects should be handled by version control like git or svn; manual file transfers (like FTP) are wrong on so many levels. How you enable/disable the correct settings (so that your system knows which ones to use) is a problem for me. Each system I work on just kind of jimmy-rigs a solution. Below are the three methods I know of, and I am hoping that someone can suggest a more elegant solution.

    1) File based. The system loads a folder structure based on the URL requested:

        /site.com
        /site.fakeTLD
        /lib
        index.php

    For example, if the URL is http://site.com then the system loads the production config files located in the site.com folder. However, if I'm working on the site locally I visit http://site.fakeTLD to work on the local copy of the site. To set this up I edit my hosts file, add site.fakeTLD to point to my own computer (127.0.0.1/localhost), and then create a vhost in Apache. So now I can work on the codebase locally and then push to the server without any trouble. The problem is that this is susceptible to a "Host" injection attack: someone loading site.com could set the Host header to site.fakeTLD, and the system would load my development config files instead of production.

    2) Config based. The config file contains one section for development and one for production. The problem is that each time you go to push your changes to the repo you have to edit the file to specify which set of config options should be used:

        $use = 'production'; //'development';

    This leaves the repo open to human error should one of the developers forget to enable the right setting.

    3) File-system-check based. All the development machines have an extra empty file called "development.txt" or something. Each time the system loads, it checks for this file: if found, it knows it is in development mode; if missing, it knows it is in production mode. Since the file is never added to the repo, it will never be pushed (and checked out) on the production machine. However, this just doesn't feel right, and it causes a slight slowdown since filesystem checks are slow.

    Is there any way that the server can auto-detect whether to use the development or the production configs?
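    One common answer to that closing question is to have the server itself declare the environment, for example via an environment variable set in the vhost or the process environment, so the application never guesses from the request. The idea is language-agnostic; here is a minimal sketch in Python (the variable name and config values below are invented):

        import os

        def load_config():
            """Select the config set from an environment variable that the web
            server or deployment sets; it is never derived from the request's
            Host header, so a client cannot spoof it."""
            env = os.environ.get("APP_ENV", "production")   # default to the safe side
            if env == "development":
                return {"db_host": "127.0.0.1", "debug": True}
            return {"db_host": "db.example.internal", "debug": False}

        if __name__ == "__main__":
            print(load_config())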

    Read the article

  • CREON development advice

    - by Matthew Hood
    We have a client who has requested a new project, with development to be done on a CREON terminal (Spectra Technologies). Can anyone recommend some good resources on terminal development, and CREON development in particular? Or does anyone know of a reputable development team specializing in CREON development? I have done a fair bit of googling/binging without much success.

    Read the article

  • django development IDE

    - by Adam Carr
    I have done a little Django development, but it has all been in a text editor. I was curious what more advanced development tools others are using. I am used to using Visual Studio for development and really like the IntelliSense, code completion, and file organization it provides, and I would like to find something (or a combination of tools) that provides some of this in the Django/Python environment.

    Read the article

  • How to Install PCRE Development Headers on Mac OSX

    - by martincarlin87
    I just upgraded my MacBook Pro to Mavericks, and my local Ruby on Rails development environment isn't running straight off the bat. When I visit localhost I see "It works!", and I remembered I needed to start Phusion Passenger. When I run passenger start it checks all the prerequisites and fails when it gets to the PCRE development headers:

        * Checking for PCRE development headers... Found: no

    It tells me to go to http://www.pcre.org/ to download them, so I downloaded 8.33 from here, which went to my Downloads folder. I unzipped it, cd'd to the folder, and ran:

        ./configure
        make
        make install

    Then I cd'd back to my Rails app directory on my Desktop and re-ran passenger start, but it's still the same. I tried a new Terminal window, but that didn't make any difference. I must have done this before to get my dev environment working but can't seem to solve it this time. I also tried brew install pcre, but it says:

        Warning: pcre-8.33 already installed.

    Read the article

  • ASP Guidance - Development on laptop with limited internet access (nonhosted)

    - by Joshua Enfield
    I am fairly experienced with the .NET family of languages, as well as web development (from a PHP perspective.) I am home for winter break and have limited internet access but would like to learn ASP using C#. Am I able to do development for ASP (and see the results) for free on my laptop (with no internet access), and if so what tools do I need? Ideally I'd like to do development in Visual Studio and see the results in my browser via localhost. Extra tools I might need would be helpful as well.

    Read the article

  • emacs frustration with web development: any working dot-files?

    - by Tony Cruise
    I really like the flexibility of Emacs, but it is really annoying to get it working. I want to use it for web development: HTML, CSS, JavaScript, PHP. I first tried emacs-starter-kit. It didn't include nXhtml. Also, the C-g key binding does not work (they call it a starter kit, but a basic key command does not work); I think it is mapped for git control. That's frustrating for a beginner. Then I replaced emacs-starter-kit with nXhtml. At least C-g works. But code completion is poor and M-tab does not work; I tried code completion from the nXhtml menu with no success. Also, nXhtml mode didn't colorize my file when CSS is mixed with HTML. Isn't it recommended for mixed HTML, CSS, and PHP files? So why doesn't it work? Why aren't the Emacs folks aware of convention over configuration? Ship something that works! Please help me before I go crazy. I use Ubuntu 10.04 and emacs-snapshot-gtk 23.1.50-1. Please guide me step by step, with the URL of your working dotfile. I accept that I am a dummy, but it is really annoying and frustrating to use Emacs.

    Read the article

  • What constitutes a development environment, and how do you document it?

    - by Joel Coehoorn
    What items go into a software shop's development environment, how do you document it, and what processes do you follow to make changes? I'm thinking about this from the standpoint where I want to make it easier to bring new hires up to speed quickly by having all of this on a checklist we follow when setting them up, and then, while I'm at it, making it easier for new hires or existing team members to bring new powerful toolkits and ideas into the environment without disrupting things. I want to keep this platform-agnostic, so even though I'm currently at a Microsoft shop where Visual Studio would be assumed, I'll go ahead and list compiler/IDE as one of the items.

    Here are some ideas for part 1 [edit: I'm keeping this updated based on the better suggestions]:

    - Source control access
    - Issue/bug/project tracker
    - System documentation, or references to find the system documentation in source control or in a wiki, including:
      - build document / environment covered by this question
      - design documents / technical notes
      - coding style guidelines
      - deploy procedures for review/testing/QA/staging/production
      - licensing details for your tools and your product
      - team calendar, including the project schedule(s), deadlines, vacation time, and support/on-call schedule (if required)
    - Compiler/IDE
    - Compiler/IDE extensions (things like source control plugins or Visual Studio add-ins)
    - 3rd-party SDKs/toolkits
    - Database connection and tools
    - Testing frameworks
    - Internal libraries
    - Communication tools (chat, wiki, etc.)
    - Static analysis tools (FxCop, FlawFinder, etc.)
    - Virtual machines (holding the dev environment or for testing)
    - Specialized editors (modeling, XML, etc.)
    - Other tools

    What else goes in this list, and how do you document it and vet changes?

    Read the article

  • What should a teen dev do for practical experience in development?

    - by aviraldg
    What should a teen dev do for practical experience? If you want more details, then read on.

    I learnt programming when I was 9, with GWBASIC (which I now hate), which was what was taught at school. That was done in a month. After that I learnt C++ and relearnt it (as I didn't know of templates and the STL before then). Recently I learnt PHP, SQL, and Python. This was around the time I switched over to Ubuntu. I'd always loved the "GNUish" style of software development, so I jumped right in. However, most of the projects that I found required extensive knowledge of their existing codebase. So, right now I'm this guy who knows a couple of languages and has written a couple of small programs... but hasn't gone "big", if you get it.

    I would love suggestions of projects that are informal and small to medium sized, and that do not require much knowledge of the codebase. Also note that I've looked at things like Google Summer of Code and sites like savannah.gnu.org; the first doesn't apply, since I'm still in school, and the latter has either infeasible projects or things that are too hard.

    Read the article

  • Waterfall Model (SDLC) vs. Prototyping Model

    The characters in the fable of the Tortoise and the Hare can easily be used to demonstrate the similarities and differences between the Waterfall and Prototyping software development models. This children's fable is about a race between a consistently slow-moving but steadfast turtle and an extremely fast but unreliable rabbit. After closely comparing each character's attributes with both software development models, a trend appears: the Waterfall Model closely resembles the Tortoise, in that it is typically a slow-moving process broken up into multiple sequential steps that must be executed in a standard linear pattern. The Tortoise can be quoted several times in the story saying "Slow and steady wins the race." This is the perfect mantra for the Waterfall Model, which is often seen as cumbersome and slow moving.

    Waterfall Model Phases

    Requirement Analysis & Definition
    This phase focuses on defining requirements for the project to be developed and determining whether the project is even feasible. Requirements are collected by analyzing existing systems and functionality in correlation with the needs of the business and the desires of the end users. The desired output for this phase is a list of specific requirements from the business that are to be designed and implemented in the subsequent steps. In addition, this phase is used to determine whether any value will be gained by completing the project.

    System Design
    This phase focuses primarily on the actual architectural design of a system and how it will interact within itself and with other existing applications. Projects at this level should be viewed at a high level so that actual implementation details are decided in the implementation phase; however, major environmental decisions like hardware and platform choices are typically made in this phase. The basic goal of this phase is to design the application at the system level, in which classes, interfaces, and interactions are defined. Decisions about scalability, distribution, and reliability should also be considered. The desired output for this phase is a functional design document that states all of the architectural decisions made in regard to the project, along with diagrams such as sequence and class diagrams.

    Software Design
    This phase focuses primarily on refining the decisions found in the functional design document. Classes and interfaces are further broken down into logical modules based on the interfaces and interactions previously indicated. The output of this phase is a formal design document.

    Implementation / Coding
    This phase focuses primarily on implementing the previously defined modules as units of code. These units are developed independently and are integrated as the system is put together as a whole.

    Software Integration & Verification
    This phase primarily focuses on testing each of the units of code developed as well as testing the system as a whole. There are two basic types of testing at this phase: unit tests and integration tests. Unit tests are built to test the functionality of a code unit to ensure that it performs its desired task. Integration testing tests the system as a whole because it focuses on the results of combining specific units of code and validating them against expected results. The output of this phase is a test plan that includes tests with expected results and actual results.

    System Verification
    This phase primarily focuses on testing the system as a whole against the list of project requirements and the desired operating environment.

    Operation & Maintenance
    This phase primarily focuses on handing the completed project over to the customer so that they can verify that all of their requirements have been met based on their original requirements. This phase also validates the correctness of those requirements and whether any changes need to be made. In addition, any problems not resolved in the previous phase are handled here.

    The Waterfall Model's linear and sequential methodology offers a project certain advantages and disadvantages.

    Advantages of the Waterfall Model
    - Simple to implement and execute, per project or company-wide
    - Limited demand on resources
    - Large emphasis on documentation

    Disadvantages of the Waterfall Model
    - Completed phases cannot be revisited, regardless of issues that arise within a project
    - Accurate requirements are never gathered prior to the completion of the requirements phase, due to the lack of clarification of the client's desires
    - Small changes or errors that arise in applications may cause additional problems
    - The client cannot change any requirements once the requirements phase has been completed, leaving them no option for changes as they see their requirements shift along with their customers' desires
    - Excess documentation
    - Phases are cumbersome and slow moving

    Learn more about the major processes in the Software Development Life Cycle and the Waterfall Model.

    Conversely, the Hare shares similar traits with the Prototyping software development model, in that ideas are rapidly converted to basic working examples and subsequent changes are made to quickly align the project with the customer's desires as they are formulated and as the software strays from the customer's vision. The basic concept of prototyping is to eliminate the use of well-defined project requirements. Projects are allowed to grow as the customer's needs and requests grow. Projects are initially designed according to basic requirements and are refined as the requirements become more refined. This process allows customers to feel their way around the application to ensure that they are getting exactly what they want. This model also works well for determining the feasibility of certain approaches to an application: prototypes allow for quickly developing examples that implement specific functionality based on certain techniques.

    Advantages of Prototyping
    - Active participation from users and customers
    - Allows customers to change their minds in specifying requirements
    - Customers get a better understanding of the system as it is developed
    - Earlier bug/error detection
    - Promotes communication with customers
    - The prototype could be used as the final production version
    - Reduced time needed to develop applications compared to the Waterfall method

    Disadvantages of Prototyping
    - Promotes constantly redefining project requirements, which can cause major system rewrites
    - Potential for increased complexity of a system as its scope expands
    - Customers could mistake the prototype for the finished, working version
    - Implementation compromises can increase complexity when applying updates and/or application fixes

    When companies are trying to decide between the Waterfall Model and the Prototyping Model, they need to evaluate the benefits and disadvantages of both. Typically, smaller companies or projects with major time constraints head for more of a Prototyping approach because it can reduce the time needed to complete the project; there is more of a focus on building the project and less on defining requirements and scope before it starts. On the other hand, companies with well-defined requirements and the time to generate proper documentation should steer toward more of a Waterfall Model, because they are in a position to obtain clarified requirements and to design an optimal solution prior to the start of coding.

    Read the article

  • Simulating O_NOFOLLOW (2): Is this other approach safe?

    - by Daniel Trebbien
    As a follow-up question to this one, I thought of another approach, which builds off of @caf's answer, for the case where I want to append to file name and create it if it does not exist. Here is what I came up with:

    1. Create a temporary directory with mode 0700 in a system temporary directory on the same filesystem as file name.
    2. Create an empty, temporary, regular file (temp_name) in the temporary directory (it only serves as a placeholder).
    3. Open file name for reading only, just to create it if it does not exist. The OS may follow name if it is a symbolic link; I don't care at this point.
    4. Make a hard link to name at temp_name (overwriting the placeholder file). If the link call fails, then exit. (Maybe someone has come along and removed the file at name, who knows?)
    5. Use lstat on temp_name (now a hard link). If S_ISLNK(lst.st_mode), then exit.
    6. Open temp_name for writing, append (O_WRONLY | O_APPEND).
    7. Write everything out. Close the file descriptor.
    8. unlink the hard link.
    9. Remove the temporary directory.

    (All of this, by the way, is for an open source project that I am working on. You can view the source of my implementation of this approach here.)

    Is this procedure safe against symbolic link attacks? For example, is it possible for a malicious process to ensure that the inode for name represents a regular file for the duration of the lstat check, then make the inode a symbolic link with the temp_name hard link now pointing to the new symbolic link? I am assuming that a malicious process cannot affect temp_name.
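    For concreteness, here is the same sequence of steps sketched in Python (the linked project is in another language; the function name and error handling below are invented). One adjustment is noted in a comment: link() cannot overwrite an existing path, so the placeholder is removed just before the hard link is made.

        import os
        import stat
        import tempfile

        def append_nofollow(name, data, tmp_root="/tmp"):
            """Append bytes to `name`, following the numbered steps above."""
            tmp_dir = tempfile.mkdtemp(dir=tmp_root)                # step 1: mode 0700 by default
            temp_name = os.path.join(tmp_dir, "placeholder")
            try:
                fd = os.open(temp_name, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
                os.close(fd)                                        # step 2: placeholder file
                fd = os.open(name, os.O_RDONLY | os.O_CREAT, 0o644)
                os.close(fd)                                        # step 3: create name if missing
                os.unlink(temp_name)                                # link() cannot overwrite a path
                os.link(name, temp_name)                            # step 4: raises OSError on failure
                st = os.lstat(temp_name)                            # step 5
                if stat.S_ISLNK(st.st_mode):
                    raise RuntimeError("refusing to append through a symbolic link")
                fd = os.open(temp_name, os.O_WRONLY | os.O_APPEND)  # step 6
                try:
                    os.write(fd, data)                              # step 7: data is bytes
                finally:
                    os.close(fd)
            finally:
                if os.path.lexists(temp_name):
                    os.unlink(temp_name)                            # step 8
                os.rmdir(tmp_dir)                                   # step 9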

    Read the article

  • Approach for caching data from data logger

    - by filip-fku
    Greetings, I've been working on a C#.NET app that interacts with a data logger. The user can query and obtain logs for a specified time period, and view plots of the data. Typically a new data log is created every minute and stores a measurement for a few parameters. To get meaningful information out of the logger, a reasonable number of logs need to be acquired - data for at least a few days. The hardware interface is a UART to USB module on the device, which restricts transfers to a maximum of about 30 logs/second. This becomes quite slow when reading in the data acquired over a number of days/weeks. What I would like to do is improve the perceived performance for the user. I realize that with the hardware speed limitation the user will have to wait for the full download cycle at least the first time they acquire a larger set of data. My goal is to cache all data seen by the app, so that it can be obtained faster if ever requested again. The approach I have been considering is to use a light database, like SqlServerCe, that can store the data logs as they are received. I am then hoping to first search the cache prior to querying a device for logs. The cache would be updated with any logs obtained by the request that were not already cached. Finally my question - would you consider this to be a good approach? Are there any better alternatives you can think of? I've tried to search SO and Google for reinforcement of the idea, but I mostly run into discussions of web request/content caching. Thanks for any feedback!
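    For what it's worth, the cache-first idea can be sketched in a few lines. The asker's app is C# with SQL Server CE; the sketch below uses Python and sqlite3 purely as an illustration, and it assumes (per the description above) that logs arrive roughly once a minute so the expected timestamps for a range are known. All names are invented.

        import sqlite3

        def open_cache(path="logger_cache.db"):
            conn = sqlite3.connect(path)
            conn.execute("""CREATE TABLE IF NOT EXISTS logs (
                                ts      INTEGER PRIMARY KEY,   -- epoch seconds, minute-aligned
                                payload TEXT NOT NULL)""")
            return conn

        def get_logs(conn, start_ts, end_ts, fetch_from_device):
            """Return all logs in [start_ts, end_ts], hitting the slow UART/USB
            path only for timestamps that are not already cached."""
            cached = {ts for (ts,) in conn.execute(
                "SELECT ts FROM logs WHERE ts BETWEEN ? AND ?", (start_ts, end_ts))}
            missing = [ts for ts in range(start_ts, end_ts + 1, 60) if ts not in cached]
            if missing:
                fresh = fetch_from_device(missing)       # expected: list of (ts, payload) tuples
                conn.executemany(
                    "INSERT OR IGNORE INTO logs (ts, payload) VALUES (?, ?)", fresh)
                conn.commit()
            return list(conn.execute(
                "SELECT ts, payload FROM logs WHERE ts BETWEEN ? AND ? ORDER BY ts",
                (start_ts, end_ts)))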

    Read the article

  • Recommended approach for error handling with PHP and MYSQL

    - by iama
    I am trying to capture database (MySQL) errors in my PHP web application. Currently, I see that there are functions like mysqli_error() and mysqli_errno() for capturing the last error that occurred. However, this still requires me to check for an error using repeated if/else statements in my PHP code. You may check my code below to see what I mean. Is there a better approach to doing this? Or should I write my own code to raise exceptions and catch them in one single place? What is the recommended approach? Also, does PDO raise exceptions? Thanks.

        function db_userexists($name, $pwd, &$dbErr)
        {
            $bUserExists = false;
            $uid = 0;
            $dbErr = '';

            $db = new mysqli(SERVER, USER, PASSWORD, DB);
            if (!mysqli_connect_errno()) {
                $query = "select uid from user where uname = ? and pwd = ?";
                $stmt = $db->prepare($query);
                if ($stmt) {
                    if ($stmt->bind_param("ss", $name, $pwd)) {
                        if ($stmt->bind_result($uid)) {
                            if ($stmt->execute()) {
                                if ($stmt->fetch()) {
                                    if ($uid)
                                        $bUserExists = true;
                                }
                            }
                        }
                    }
                    if (!$bUserExists)
                        $dbErr = $db->error;
                    $stmt->close();
                }
                if (!$bUserExists)
                    $dbErr = $db->error;
                $db->close();
            } else {
                $dbErr = mysqli_connect_error();
            }
            return $bUserExists;
        }

    Read the article

  • TDD approach for complex function

    - by jamie
    I have a method in a class for which there are a few different outcomes (based upon event responses etc.), but it is a single atomic function which is to be used by other applications. I have broken down the main blocks of functionality that comprise this function into different functions and successfully taken a Test Driven Development approach to each of these elements. These elements, however, aren't exposed for other applications to use. And so my question is: how can/should I approach a TDD-style solution to verifying that the single method that should be called does function correctly, without a lot of duplication in testing or lots of setup required for each test? I have considered/looked at moving the blocks of functionality into a different class and using mocking to simulate the responses of the functions used, but it doesn't feel right, and the individual methods need to write to variables within the main class (it felt really Heath Robinson). The code roughly looks like this (I have removed a lot of parameters to make things clearer, along with a fair bit of irrelevant code):

        public void MethodToTest(string parameter)
        {
            IResponse x = null;

            if (function1(parameter))
            {
                if (!function2(parameter, out x))
                {
                    function3(parameter, out x);
                }
            }

            // ...
            // more bits of code here
            // ...

            if (x != null)
            {
                x.Success();
            }
        }
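    For reference, here is a minimal sketch of what the "mock the building blocks" route can look like, translated into Python with unittest.mock since the original is C# (class and method names below simply mirror the snippet above). It stubs function1/function2/function3 on the same object, so each outcome of the orchestrating method gets one small test; it does not resolve the asker's concern about the helpers writing to fields of the main class.

        import unittest
        from unittest.mock import MagicMock

        class Processor:
            def method_to_test(self, parameter):
                response = None
                if self.function1(parameter):
                    ok, response = self.function2(parameter)
                    if not ok:
                        response = self.function3(parameter)
                if response is not None:
                    response.success()

            # Real implementations live elsewhere; the tests replace them.
            def function1(self, parameter): raise NotImplementedError
            def function2(self, parameter): raise NotImplementedError
            def function3(self, parameter): raise NotImplementedError

        class MethodToTestOutcomes(unittest.TestCase):
            def setUp(self):
                self.p = Processor()
                self.p.function1 = MagicMock(return_value=True)
                self.response = MagicMock()

            def test_uses_function2_result_when_it_succeeds(self):
                self.p.function2 = MagicMock(return_value=(True, self.response))
                self.p.function3 = MagicMock()
                self.p.method_to_test("param")
                self.p.function3.assert_not_called()
                self.response.success.assert_called_once()

            def test_falls_back_to_function3_when_function2_fails(self):
                self.p.function2 = MagicMock(return_value=(False, None))
                self.p.function3 = MagicMock(return_value=self.response)
                self.p.method_to_test("param")
                self.response.success.assert_called_once()

        if __name__ == "__main__":
            unittest.main()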

    Read the article

  • best PHP programming approach?

    - by Nazmin
    Hello guys, I guess that many of us have a reasonable number of years of experience in PHP programming; I myself have five years of PHP, going back to university (not very solid). I just want to gather some suggestions here: what is the better approach to PHP OOP? Since there are a lot of PHP frameworks out there, as well as JavaScript frameworks, can anyone share their experience using one of them, or using a PHP and a JavaScript framework together, especially for developing an enterprise system? (My company, an ISP, wants to start using a framework.) If anyone has their own framework, classes, or PHP files and so on, would you mind sharing? Right now I only have a database connection class. I've never used any PHP framework before, just jQuery as a JavaScript framework. My company has a messed-up programming approach because it is not actually a software house, so the files are scattered across the server. I see that there are earlier questions about frameworks, but I just want to know your approaches to OOP and to developing massive web apps in PHP, or maybe someone will be kind enough to share a solidly written class. Also the pros and cons of all these things. Thanks in advance.

    Read the article

  • Best approach to cache Counts from SQL tables ?

    - by pixel3cs
    I would like to develop a forum from scratch, with special needs and customization. I would like to prepare my forum for intensive usage, and I am wondering how to cache things like user post counts and user reply counts. Having only three tables, tblForum, tblForumTopics, tblForumReplies, what is the best approach to caching the user topic and reply counts?

    Consider a simple scenario: a user clicks a link and opens the Replies.aspx?id=x&page=y page, and starts reading replies. On the HTTP request, the server will run a SQL command which fetches all replies for that page, also inner joining with tblForumReplies to find the number of replies by each user that replied:

        select tblForumReplies.*, tblFR.TotalReplies
        from tblForumReplies
        inner join (
            select IdRepliedBy, count(*) as TotalReplies
            from tblForumReplies
            group by IdRepliedBy
        ) as tblFR on tblFR.IdRepliedBy = tblForumReplies.IdRepliedBy

    Unfortunately this approach is very CPU intensive, and I would like to see your ideas for how to cache things like table counts. If I count replies for each user on insert/delete and store the count in a separate field, how do I keep it synchronized with manual data changes? Suppose I manually delete replies in SQL.

    Read the article

  • The right approach to loading dynamic content into a UITableView in iOS

    - by OS.
    OK, I've read tons of bits and pieces on the subject of loading dynamic content (from the web) into a UITableView and the problem with calculating cell height upfront. I've tried different simple implementations, but the problem persists. Assuming I need to read a JSON file from the web and parse it into 'item' objects, each with a variable-size image and various text labels, here is what I believe would be the right approach to avoid a long hang time of the app while everything is loading:

    1. On app load, read the JSON file and parse it into an items array.
    2. Provide only a small part of the items array to the table view (about 10 items). Since I need to load the images associated with each item to calculate cell height, I don't want the view to go through the whole items list and load all images; this hangs the app until every image is loaded.
    3. Display the table view with the available cells (assuming I load a few 'spare' ones, the user can even scroll to more items).
    4. In the background, using Grand Central Dispatch, download images for all/some of the remaining items and then reload the table view with the new data (repeat this step if the item list is very long).

    Step 2 above is necessary since I have no way to calculate the cell height without loading the images first, and since the table view first calculates the height of all cells, it may take a very long time to download all images for all items. Would you say this is the right approach? Am I missing something?

    Read the article

  • Is my approach for persistent login secure ?

    - by Jay
    I'm very much stuck on a reasonably secure approach to implementing a 'Remember me' feature in a login system. Here's my approach so far; please advise me on whether it makes sense and is reasonably secure.

    Logging in:
    1. User provides email and password to log in (both are valid).
    2. Get the user_id from DB table Users by comparing the provided email.
    3. Generate 2 random hashed strings, key1 and key2, and store them in cookies.
    4. In DB table COOKIES, store key1 and key2 along with user_id.

    To check login:
    1. If both the key1 and key2 cookies exist, validate both keys against DB table COOKIES (if a row with key1 and key2 exists, the user is logged in).
    2. If the cookie is valid, regenerate key2 and update it in the cookie and also in the database.

    Why regenerate the key: because if someone steals the cookie and logs in with it, it will work only until the real user logs in. When the real user logs in, the stolen cookie will become invalid. Right?

    Why do I need 2 keys: because if I store user_id and a single key in the cookie and database, and the user wants to be remembered on another browser or computer, then the new key will be updated in the database, so the user's cookie in the earlier browser/PC will become invalid. The user wouldn't be able to stay remembered in more than one place.

    Thanks for your opinions.
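    For concreteness, here is a small sketch of generating the two random keys and comparing them safely (in Python; function names are invented). The hashing helper at the end is an optional hardening that goes beyond what is described above: storing only a hash of each key server-side means a dump of the COOKIES table cannot be replayed directly as cookies.

        import hashlib
        import hmac
        import secrets

        def new_login_keys():
            """Generate the two random cookie values (key1, key2) described above."""
            return secrets.token_urlsafe(32), secrets.token_urlsafe(32)

        def keys_match(presented, stored):
            """Compare a presented cookie value against the stored one without
            leaking timing information."""
            return hmac.compare_digest(presented, stored)

        def hash_key(key):
            """Optional: store hash_key(key) instead of the raw key in the COOKIES table."""
            return hashlib.sha256(key.encode()).hexdigest()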

    Read the article
