Search Results

Search found 3244 results on 130 pages for 'proof of concept'.

Page 7 of 130

  • Future-proofing client-server code?

    - by Koran
    Hi, We have a web-based client-server product. The client is expected to be used by upwards of 1M users (a famous company is going to use it). Our server is set up in the cloud. One of the major questions while designing is how to make the whole program future-proof: say the cloud provider goes down and we move automatically to a backup in another cloud, or we move to a different server altogether, etc. The options we have thought of so far are:
    1. DNS: running a DNS name server on the cloud ourselves.
    2. Directory server: the directory server also lives on the cloud.
    3. Have our server return future movements and future URLs etc. to the client, with the client specifically designed to handle those scenarios.
    Since this must be a fairly common problem, which is the best solution? Since our company is a very small one, we are looking for the least technically and financially expensive option (say option 3)? Could someone provide some pointers? K
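
    A minimal sketch of how option 3 might look on the client side, assuming a hypothetical /endpoints resource that returns the currently valid server URLs (every name and URL below is invented for illustration):

        import json
        import urllib.request

        BOOTSTRAP_URLS = [
            "https://api.example.com",     # primary cloud deployment
            "https://backup.example.net",  # standby in another cloud
        ]

        def fetch_endpoint_list(urls):
            """Ask each known server, in order, for the current endpoint list."""
            for base in urls:
                try:
                    with urllib.request.urlopen(base + "/endpoints", timeout=5) as resp:
                        return json.load(resp)  # e.g. ["https://api.example.com", ...]
                except OSError:
                    continue  # unreachable or failing: fall through to the next URL
            return urls  # nobody answered; keep whatever list we already had

    If the client persists the most recent list it received, the server can announce a move long before the old address disappears.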

    Read the article

  • Valid Email Addresses - XSS and SQL Injection

    - by PAAMAYIM_NEKUDOTAYIM
    Since there are so many valid characters in email addresses, are there any valid email addresses that can in themselves be XSS attacks or SQL injections? I couldn't find any information on this on the web. The local part of the e-mail address may use any of these ASCII characters: uppercase and lowercase English letters (a–z, A–Z); digits 0 to 9; the characters ! # $ % & ' * + - / = ? ^ _ ` { | } ~; and the character . (dot, period, full stop), provided that it is not the last character and that it does not appear two or more times consecutively (e.g. [email protected]). http://en.wikipedia.org/wiki/E-mail_address#RFC_specification I'm not asking how to prevent these attacks (I'm already using parameterized queries and HTML Purifier); this is more a proof of concept. The first thing that came to mind was 'OR [email protected], except that spaces are not allowed. Do all SQL injections require spaces?
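
    For what it's worth, a sketch suggesting the answer to the last question is probably no, built only from characters in the permitted set above (the payload is my own illustration, not from the post, and whether a given database actually executes it depends entirely on the SQL dialect):

        # ', /, *, -, and = are all legal in a local part, and /**/ comments
        # can stand in for spaces in several SQL dialects.
        payload = "'/**/OR/**/1=1--"
        email = payload + "@example.com"

        # The vulnerable pattern the question is about: naive concatenation.
        query = "SELECT * FROM users WHERE email = '" + email + "'"
        print(query)
        # SELECT * FROM users WHERE email = ''/**/OR/**/1=1--@example.com'

    Whether the trailing comment marker silences the rest of the statement also varies by dialect (MySQL, for instance, wants a space after -- but accepts #, which is likewise legal in a local part), so treat this strictly as a proof of concept.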

    Read the article

  • What would you call the concept behind CoffeeScript or Sass?

    - by MaG3Stican
    There is a rising trend in web development of creating new pseudo-languages to extend the functionality of JavaScript, CSS, and HTML, given that those are static and their evolution is painfully slow due to the variety of browser vendors. I am currently facing a conceptual dilemma about how to categorize them for a book my employer asked me to write, as no one seems to have a name for these pseudo-languages. A tiny list of them: for JavaScript, LiveScript, Metalua, Uberscript, EmberScript; for HTML, Razor and Java scriptlets; for CSS, LESS and Sass. I believe the concept of these pseudo-languages is quite different from that of a language, or of an extension of a language. First, they do not extend any functionality currently existing in HTML, CSS, or JavaScript; they simply work around it. They also do not "compile" to an intermediate language; they are merely translated 1-1 into something that only then can be compiled. What would you call them?

    Read the article

  • What would you think of a job based on mostly doing the proof of concept?

    - by davsan
    I'm working as a developer in a small software company whose main job is interfacing between separate applications: between a telephony system and an environment-control system, between IP TVs and hospitality systems, etc. It seems I am the candidate for a new job title in the company, as the person who does the proof of concept for each new interfacing project and does some R&D for prototyping. What do you think the pros and cons of such a job would be, considering mainly the individual progress (or regress) of a person as a software engineer? And what aspects would you consider essential in a person before putting him or her in such a position?

    Read the article

  • Ask the Readers: How Do You Set Up a Novice-Proof Computer?

    - by Jason Fitzpatrick
    You’re into technology, you like tweaking and tinkering with computers, and, most importantly, you know how to keep your computer from turning into a virus-laden and fiery wreck. What about the rest of your family and friends? How do you set up a novice-proof computer to keep them secure, updated, and happy? It’s no small task protecting a computer from an inexperienced user, but for the benefit of both the novice and the innocent computer it’s an important undertaking. This week we want to hear all about your tips, tricks, and techniques for configuring the computers of your friends and relatives to save them from themselves (and keep their computers running smoothly in the process). Sound off in the comments with your tricks, and check back in on Friday for the What You Said roundup to see how your fellow readers get the job done.

    Read the article

  • What trivial real-life example do you use to explain programming to total non-programmers?

    - by anon
    Programmers seem to live in a world of their own (as this site indicates), with their own vibrant culture - and their own premises and vocabulary. Once we've been in the field for a bit, we take a lot of things for granted. But I'm often faced with the question "What do you do?" or "What is programming?" I generally try to answer this with a small, often trivial, real-world example of how programming is prevalent in our everyday lives and keeps things running. The example I use most often is an elevator - someone has to program the logic of that... and I've seen elevators that are "smart" and ones that are quite backwards and foolish. (You can easily understand if/else decisions and looping from it; it incorporates a lot of important programming concepts in a very small example.) I've sometimes heard people use traffic lights as an example. What example do you / would you use to explain the concept of programming to someone completely clueless?
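
    To make the elevator example concrete, here is a toy sketch of the kind of decision-and-loop logic being described (the scheduling policy is invented purely for illustration):

        def next_move(current_floor, requests):
            """Decide which floor the elevator should visit next."""
            if not requests:                 # decision: nothing to do
                return current_floor         # stay where we are
            closest = requests[0]
            for floor in requests[1:]:       # loop: scan every outstanding request
                if abs(floor - current_floor) < abs(closest - current_floor):
                    closest = floor
            return closest

        print(next_move(3, [7, 2, 5]))       # -> 2: nearest request wins in this toy policy

    A "smart" elevator and a "foolish" one differ only in this policy function, which is a nice way to show a non-programmer what the job actually is.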

    Read the article

  • How do people prove the correctness of Computer Vision methods?

    - by solvingPuzzles
    I'd like to pose a few abstract questions about computer vision research. I haven't quite been able to answer these questions by searching the web and reading papers. How does someone know whether a computer vision algorithm is correct? How do we define "correct" in the context of computer vision? Do formal proofs play a role in understanding the correctness of computer vision algorithms? A bit of background: I'm about to start my PhD in computer science. I enjoy designing fast parallel algorithms and proving their correctness. I've also used OpenCV in some class projects, though I don't have much formal training in computer vision. I've been approached by a potential thesis advisor who works on designing faster and more scalable algorithms for computer vision (e.g. fast image segmentation). I'm trying to understand the common practices in solving computer vision problems.

    Read the article

  • Formal Equivalence between programming languages

    - by Ketan
    Hello, We have two languages which are (informally) semantically equivalent but syntactically different. One is XML-based and the other is script-based. The script approach is just a convenient way to write the same program that would be tedious to write in XML. How can I go about formally proving that the two languages are in fact equivalent? Thanks, Ketan
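
    One standard approach (a suggestion of mine, not something from the post) is to translate both concrete syntaxes into a shared abstract syntax and argue that each translation preserves meaning; if every program in either syntax maps to the same abstract term, the languages agree by construction. A toy sketch with invented grammars:

        import xml.etree.ElementTree as ET
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Call:                 # the shared abstract syntax: an action plus arguments
            name: str
            args: tuple

        def from_xml(text):
            node = ET.fromstring(text)
            return Call(node.tag, tuple(node.attrib.values()))

        def from_script(text):
            name, _, rest = text.partition("(")
            return Call(name.strip(), tuple(a.strip() for a in rest.rstrip(")").split(",")))

        # Both front ends yield the same abstract term, so the two forms agree:
        assert from_xml('<copy src="a" dst="b"/>') == from_script('copy(a, b)')

    The formal version of this is a proof that the two translation functions are semantics-preserving, typically by structural induction over each grammar.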

    Read the article

  • Prove correctness of unit test

    - by Timo Willemsen
    I'm creating a graph framework for learning purposes. I'm using a TDD approach, so I'm writing a lot of unit tests. However, I'm still figuring out how to prove the correctness of my unit tests. For example, I have this class (not including the implementation, and simplified):

        public class SimpleGraph {
            // Returns true on success
            public boolean addEdge(Vertex v1, Vertex v2) { ... }

            // Returns true on success
            public boolean addVertex(Vertex v1) { ... }
        }

    I have also created this unit test:

        @Test
        public void simpleGraph_addVertex_noSelfLoopsAllowed() {
            SimpleGraph g = new SimpleGraph();
            Vertex v1 = new Vertex("Vertex 1");
            g.addVertex(v1);

            boolean expected = false;               // a self-loop should be rejected
            boolean actual = g.addEdge(v1, v1);
            Assert.assertEquals(expected, actual);
        }

    Okay, awesome, it works. There is only one crux here: I have proved that the functions work for this case only. In my graph theory courses, however, all I'm doing is proving theorems mathematically (induction, contradiction, etc.). So I was wondering: is there a way I can prove my unit tests mathematically correct? Is there a good practice for this? That is, testing the unit for correctness, instead of testing it for one particular outcome.

    Read the article

  • Have I checked every consecutive subset of this list?

    - by Nathan
    I'm trying to solve Problem 50 on Project Euler. Don't give me the answer or solve it for me; just try to answer this specific question. The goal is to find the longest sum of consecutive primes that adds to a prime below one million. I wrote a sieve to find all the primes below n, and I have confirmed that it is correct. Next, I am going to check the sum of each subset of consecutive primes using the following method: I have an empty list, sums. For each prime number, I add it to every element in sums and check the new sum, then I append the prime to sums. Here it is in Python:

        primes = allPrimesBelow(1000000)
        sums = []
        for p in primes:
            for i in range(len(sums)):
                sums[i] += p
                check(sums[i])
            sums.append(p)

    I want to know if I have called check() for every sum of two or more consecutive primes below one million. The problem says that there is a prime, 953, that can be written as the sum of 21 consecutive primes, but I am not finding it.

    Read the article

  • Is there any DLL or DLL-like concept in Android?

    - by Prashant
    Hello, We know that we can use the "Java package" concept, but I just wanted to know whether Android provides a DLL or DLL-like concept where we can put most of the functionality. Or can an Activity serve the purpose of a DLL? Can anyone tell me whether there is any concept like a DLL on Android OS? Can we develop a DLL for better modularization and other benefits on Android? Thanks and regards, Prashant.

    Read the article

  • Where does this concept of "favor composition over inheritance" come from?

    - by Mason Wheeler
    In the last few months, the mantra "favor composition over inheritance" seems to have sprung up out of nowhere and become almost some sort of meme within the programming community. And every time I see it, I'm a little bit mystified. It's like someone said "favor drills over hammers." In my experience, composition and inheritance are two different tools with different use cases, and treating them as if they were interchangeable and one was inherently superior to the other makes no sense. Also, I never see a real explanation for why inheritance is bad and composition is good, which just makes me more suspicious. Is it supposed to just be accepted on faith? Liskov substitution and polymorphism have well-known, clear-cut benefits, and IMO comprise the entire point of using object-oriented programming, and no one ever explains why they should be discarded in favor of composition. Does anyone know where this concept comes from, and what the rationale behind it is?
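
    For readers new to the vocabulary, a minimal sketch (mine, not the poster's) of the two tools being contrasted; inheritance models an "is-a" relationship and composition a "has-a" one:

        class Engine:
            def start(self):
                return "vroom"

        class Vehicle:
            def describe(self):
                return "a vehicle"

        class Car(Vehicle):                  # inheritance: a Car *is a* Vehicle
            def __init__(self, engine):
                self.engine = engine         # composition: a Car *has an* Engine

            def start(self):
                return self.engine.start()   # delegate instead of inheriting behavior

        car = Car(Engine())
        print(car.describe(), "-", car.start())   # a vehicle - vroom

    The debate the poster describes is about which relationship to reach for by default, not about one construct replacing the other.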

    Read the article

  • Does semi-normalization exist as a concept? Is it "normalized"?

    - by Gracchus
    If you don't mind, a tl;dr on my experience first. I have an application that's heavily dependent upon uncertainty, a bane to database design. I tried to normalize it as best I could according to the capabilities of my database of choice, but a "simple" query took 50ms to read. NoSQL appeals to me, but I can't trust myself with it, and besides, normalization has cut down my debugging time immensely, over and over. So instead of 100% normalization, I made semi-redundant 1:1 tables with very wide primary keys and equivalent foreign keys. Read times dropped to a few ms, and write times barely degraded. Now to the semi-normalized point: given this reality, which anyone who's tried to rely upon views of fully normalized data is aware of, is this concept codified? Is it as simple as having wide unique and foreign keys, or are there hidden secrets to this technique? Or is uncertainty merely a special case that has extremely limited application and can be left on the ash heap?
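
    A toy rendering of my reading of "semi-redundant 1:1 tables" (my illustration, not the poster's actual schema): one wide key shared by sibling tables, so hot columns can be read without joining through the whole normal form:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE order_core (
                region TEXT, store TEXT, order_no INTEGER,
                total  REAL,
                PRIMARY KEY (region, store, order_no)        -- the wide key
            );
            CREATE TABLE order_detail (                      -- 1:1 sibling, same key
                region TEXT, store TEXT, order_no INTEGER,
                notes  TEXT,
                PRIMARY KEY (region, store, order_no),
                FOREIGN KEY (region, store, order_no)
                    REFERENCES order_core (region, store, order_no)
            );
        """)
        db.execute("INSERT INTO order_core VALUES ('us', 's1', 42, 9.99)")
        # Reads that only need the core row never pay for the join:
        print(db.execute("SELECT total FROM order_core WHERE order_no = 42").fetchone())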

    Read the article

  • What is the concept behind writing a cancel operation in C++?

    - by ToMan
    I'm attempting to write a cancel operation for a software-download application. The application first transfers the software to the device and then installs it there (these are givens I'm not allowed to change). What should the cancel operation do? When a user presses Cancel, the application should stop transferring/installing the software immediately. Question: since I've never written a cancel function, I'm wondering what kinds of things to consider when writing the code, what common bugs I should expect, and how to deal with them. I couldn't find anything on Google, so if you have links that would be good reads I'd really appreciate it. I'm not looking for answers, just guidelines/macro/concept help.
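
    As a starting point, a minimal sketch of one common pattern, cooperative cancellation via a shared flag (my illustration, shown in Python for brevity; the same shape applies in C++ with a std::atomic<bool> checked by the worker):

        import threading
        import time

        cancel_requested = threading.Event()

        def transfer(chunks):
            sent = []
            for chunk in chunks:
                if cancel_requested.is_set():   # poll between units of work, never mid-write
                    print("cancelled; cleaning up", len(sent), "partial chunks")
                    return False                # the worker undoes its own partial effects
                time.sleep(0.1)                 # stand-in for the real transfer I/O
                sent.append(chunk)
            return True

        worker = threading.Thread(target=transfer, args=(range(100),))
        worker.start()
        time.sleep(0.25)
        cancel_requested.set()                  # the Cancel button sets this flag
        worker.join()

    The usual bugs to expect are exactly the gaps this sketch glosses over: a cancel that arrives in the middle of a non-restartable step, and cleanup of whatever was already transferred or half-installed.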

    Read the article

  • What do you call the concept of dynamic data definition?

    - by DJTripleThreat
    Maybe this is simpler and more straightforward than what I'm thinking, but I can't seem to find this concept on Google anywhere. The concept is this: you have a table in a database, and the table has a specified number of columns. However, previous clients have asked me that there also be a set of dynamic, user-defined columns that can be added on the fly. What is this concept called, and is it considered a design pattern?
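
    One common shape for this requirement, often called the entity-attribute-value (EAV) pattern, keeps the fixed columns in the main table and moves the user-defined ones into a name/value side table. The schema below is my own illustration:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE item (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE item_attr (
                item_id INTEGER REFERENCES item(id),
                attr    TEXT,    -- the user-defined "column" name
                value   TEXT,
                PRIMARY KEY (item_id, attr)
            );
        """)
        db.execute("INSERT INTO item VALUES (1, 'widget')")
        db.execute("INSERT INTO item_attr VALUES (1, 'color', 'red')")  # a "column" added on the fly
        print(db.execute("SELECT attr, value FROM item_attr WHERE item_id = 1").fetchall())
        # [('color', 'red')]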

    Read the article

  • Recursion Vs Loops

    - by sachin
    I am trying to work through the examples on trees given here: http://cslibrary.stanford.edu/110/BinaryTrees.html These examples all solve problems via recursion, and I wonder if we can provide an iterative solution for each one of them. In other words, can we always be sure that a problem which can be solved by recursion will also have an iterative solution, in general? If not, what example can we give of a problem that can be solved only by recursion, or only by iteration?
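
    For what it's worth, the standard argument that the answer is yes is that any recursive procedure can be rewritten as a loop by managing the call stack explicitly. A small sketch of that transformation (my example, not from the linked page):

        class Node:
            def __init__(self, left=None, right=None):
                self.left, self.right = left, right

        def size_recursive(node):
            if node is None:
                return 0
            return 1 + size_recursive(node.left) + size_recursive(node.right)

        def size_iterative(root):
            count, stack = 0, [root]
            while stack:                      # the explicit stack replaces the call stack
                node = stack.pop()
                if node is not None:
                    count += 1
                    stack.extend((node.left, node.right))
            return count

        tree = Node(Node(Node()), Node())
        assert size_recursive(tree) == size_iterative(tree) == 4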

    Read the article

  • jQuery's draggable grid

    - by Art
    It looks like the 'grid' option in the constructor of Draggable is bound relative to the original coordinates of the element being dragged. Simply put, suppose you have three draggable divs with their top set respectively to 100, 200, and 254 pixels relative to their parent:

        <div id="parent-div" style="position: relative;">
            <div id="div1" class="draggable" style="top: 100px; position: absolute;"></div>
            <div id="div2" class="draggable" style="top: 200px; position: absolute;"></div>
            <div id="div3" class="draggable" style="top: 254px; position: absolute;"></div>
        </div>

    and all of them are enabled for dragging with 'grid' set to [1, 100]:

        var draggables = $('.draggable');
        $.each(draggables, function(index, elem) {
            $(elem).draggable({
                containment: $('#parent-div'),
                opacity: 0.7,
                revert: 'invalid',
                revertDuration: 300,
                grid: [1, 100],
                refreshPositions: true
            });
        });

    The problem is that as soon as you drag div3 down, say, its top is increased by 100, moving it to 354px, instead of being increased by a mere 46px (254 + 46 = 300), which would put it at the next stop in the grid - 300px - if we treat parent-div as the point of reference and grid holder. I had a look at the Draggable sources and it seems to be an in-built flaw: all the calculations are done relative to the original position of the draggable element. I would like to avoid monkey-patching the Draggable library's code; what I am really looking for is a way to make Draggable calculate the grid positions relative to the containing parent. However, if monkey-patching is unavoidable, I guess I'll have to live with it. Thanks!

    Read the article

  • Security question

    - by Syom
    In my CMS I have index.php, where the client must enter a username and password. If they are correct, he'll move on to admin.php, where the CMS is. But right now a hacker can go straight to cms/admin.php, so my security is awful. I know that I can use the $_SESSION variable: in index.php I can set $_SESSION['success'] to TRUE, and in admin.php just verify it.

    index.php, after the credentials check:

        session_start();
        $_SESSION['success'] = TRUE;

    admin.php:

        session_start();
        if ($_SESSION['success'] == TRUE) {
            // my script here...
        } else {
            header("Location: index.php");
        }

    But I want to achieve this effect without sessions. Could you give me an idea of how I can do it? Thanks.

    Read the article

  • Macros in C... please give the solution

    - by Jungle_hacker
    Suppose I declared a macro, named anything, say xyz(). Now I create another macro, xyz1(), and reference the 1st macro, i.e. xyz(), in the 2nd. Finally, I create another macro, xyz2(), and reference the 2nd macro in the 3rd. Now my questions: is this correct (it executes without any problem)? And when the macro xyz() is defined twice, why does it not give an error? What is the solution?

    Read the article

  • How do I determine how future-proof and stable a router is?

    - by Aarthi
    I mentioned in my last question that my wireless router had a bad habit of crashing. After consulting with the Super User chatroom, as well as my sysadmin, I've decided I may as well purchase a new router. However, I'm unsure how to evaluate all the tech specs that get touted about. The two things that seem most important to me are: (1) keeping my router future-proof (as standards evolve and change), and (2) ensuring its stability. Unfortunately, I'm not sure what, exactly, I should be looking for in the tech specs or the item description that can give me a good idea of how stable or future-proof my device will be. What should I look for? Can I determine stability without having to try the device out myself? Please note: I'm not a battle-hardened power user by any means, so I'll likely be reliant on the stock firmware for my router. My last router lasted me about four years, and I mostly just want something that'll cover a 500 sqft apartment in New York with minimal crashing, so that I can watch Hulu in peace. And make Skype calls. If it helps, the router models I'm currently deciding between are this ASUS one and this LinkSys one.

    Read the article

  • Learning a new concept - write from scratch or use frameworks?

    - by Stu
    I have recently been trying to learn about MVVM and all of the associated concepts such as repositories, mediators, and data access. I made a decision that I would not use any frameworks for this, so that I could gain a better understanding of how everything worked. I’m beginning to wonder if that was the best idea, because I have hit some problems which I am not able to solve, even with the help of Stack Overflow!

    Writing from scratch: I still feel that you have a much better understanding of something when you have been in the guts of it than if you were working at a higher level. The other side of that coin is that you are in the guts of something you don't fully understand, which will lead to bad design decisions. That then makes it hard to get help, because you will create unusual scenarios which are less likely to occur when you are working within the confines of a framework. I have found that there are plenty of tutorials on the basics of a concept, but very few that take you all the way from novice to expert. Maybe I should be looking at a book for this?

    Using frameworks: The biggest motivation for me to use frameworks is that they are much more likely to be used in the workplace than a custom-rolled solution. This can be quite a benefit when starting a new job, if it's one less thing you have to learn. I feel that there is much better support for a framework than for a custom solution, which makes sense; many more people are using the framework than the solution that you created. The level of help is much wider as well, from basic questions to really specific, detailed ones.

    I would be interested to hear other people's views on this. When you are learning something new, should you/do you use frameworks or not? Why? If it's a combination of both, when do you stop one and move on to the other?

    Read the article

  • What arguments can I use to "sell" the BDD concept to a team reluctant to adopt it?

    - by S.Robins
    I am a bit of a vocal proponent of the BDD methodology. I've been applying BDD for a couple of years now, and have adopted StoryQ as my framework of choice when developing .NET applications. Even though I have been unit testing for many years, and had previously shifted to a test-first approach, I've found that I get much more value out of using a BDD framework, because my tests capture the intent of the requirements in relatively clear English within my code, and because my tests can execute multiple assertions without ending the test halfway through - meaning I can see at a glance which specific assertions pass or fail, without debugging to prove it. This has really been the tip of the iceberg for me, as I've also noticed that I am able to debug both test and implementation code in a more targeted manner, with the result that my productivity has grown significantly, and that I can more easily determine where a failure occurs if a problem happens to make it all the way to the integration build, thanks to the output that makes its way into the build logs. Further, the StoryQ API has a lovely fluent syntax that is easy to learn and can be applied in an extraordinary number of ways, requiring no external dependencies in order to use it. So with all of these benefits, you would think it easy to introduce the concept to the rest of the team. Unfortunately, the other team members are reluctant to even look at StoryQ to evaluate it properly (let alone entertain the idea of applying BDD), and have convinced each other to try to remove a number of StoryQ elements from our own core testing framework, even though they originally supported the use of StoryQ, and even though it doesn't impact any other part of our testing system. Doing so would increase my workload significantly overall, and really goes against the grain, as I am convinced through practical experience that it is a better way to work in a test-first manner in our particular working environment, and can only lead to greater improvements in the quality of our software, given I've found it easier to stick with test-first using BDD. So the question really comes down to the following:

    1. What arguments can I use to really drive the point home that it would be better to use StoryQ, or at the very least apply the BDD methodology?
    2. Can you point me to any anecdotal evidence that I can use to support my argument to adopt BDD as our standard method of choice?
    3. What counter-arguments can you think of that could suggest my wish to convert the team's efforts to BDD might be in error? Yes, I'm happy to be proven wrong, provided the argument is a sound one.

    NOTE: I am not advocating that we rewrite our existing tests in their entirety, but rather that we simply start working in a different manner for all future testing work.

    Read the article

  • F# performance vs Erlang performance, is there proof the Erlang's VM is faster?

    - by afuzzyllama
    I've been putting time into learning functional programming, and I've come to the point where I want to start writing a project instead of just dabbling in tutorials and examples. While doing my research, I've found that Erlang seems to be pretty powerful when it comes to writing concurrent software (which is my goal), but its resources and development tools aren't as mature as Microsoft's development products. F# can run on Linux (via Mono), so that requirement is met, but while looking around on the internet I cannot find any comparisons of F# vs. Erlang. Right now I am leaning towards Erlang, just because it seems to have the most press, but I am curious whether there is really any performance difference between the two languages. Since I am used to developing in .NET, I could probably get up to speed with F# a lot faster than with Erlang, but I cannot find any resource to convince me that F# is just as scalable as Erlang. I am most interested in simulation, which is going to fire a lot of quickly processed messages at persistent nodes. If I have not done a good job explaining what I am trying to ask, please ask for clarification.

    Read the article

  • How to future-proof my touch-enabled web application?

    - by Rice Flour Cookies
    I recently went out and purchased a touch-screen monitor with the intention of learning how to program touch-enabled web applications. I had reviewed the MDN documentation about touch events, as well as the W3C specification. To get started, I wrote a very short test page with two event handlers: one for the mousedown event and one for the touchstart event. I fired up the web page in IE, touched the document, and found that only the mousedown event fired. I saw the same behavior with Firefox, only to find out later that Firefox can be set to enable the touchstart event using about:config. When touch events are enabled, the touchstart event fires, but not mousedown. Chrome was even stranger: it fired both events when I touched the document, touchstart and mousedown, in that order. Only on my Android phone does it appear that only the touchstart event fires when I touch the document. I did a Google search and ended up on two interesting pages. First, I found the CanIUse page for touch events: http://caniuse.com/#feat=touch Can I Use clearly indicates that IE does not support touch events as of this writing, and Firefox only supports touch events if they are manually enabled. Furthermore, all the browsers I mentioned treat the touch in a completely different way. It boils down to this:

    IE: simulated mouse click
    Firefox with touch disabled: simulated mouse click
    Firefox with touch enabled: touch event
    Chrome: touch event and simulated mouse click
    Android: touch event

    What is more frustrating is that Google also found a Microsoft page called RethinkIE. RethinkIE brags about touch support in IE; as a matter of fact, one of their slogans is "Touch the Web". It links to a number of touch-based applications. I followed some of these links, and as best I can tell, it's just as CanIUse described: no proper touch support, just simulated mouse clicks. The MDN (https://developer.mozilla.org/en-US/docs/Web/API/Touch) and W3C (http://www.w3.org/TR/touch-events/) documentation describe a far richer interface; an interface that doesn't just simulate mouse clicks, but keeps track of multiple touches at once, the contact area, rotation, and force of each touch, and unique identifiers for each touch so that they can be tracked individually. I don't see how simulated mouse clicks can ever match the functionality described above, which, once again, is part of the W3C specification, although it is listed as "non-normative", meaning that a browser can claim to be standards-compliant without implementing it. (Why bother making it part of the standard, then?) What motivated my research is that I've written an HTML5 application that doesn't work on Android, because Android doesn't fire mouse events. I'm now afraid to try to implement touch for my application, because the browsers all behave so differently. I imagine that at some point in the future the browsers might start handling touch similarly, but how can I tell how it might be handled, short of writing code for the behavior of each individual browser? Is it possible to write code today that will work with touch-enabled browsers for years to come? If so, how?

    Read the article
