Search Results

Search found 53297 results on 2132 pages for 'web design hero'.

Page 406/2132 | < Previous Page | 402 403 404 405 406 407 408 409 410 411 412 413  | Next Page >

  • Cherrypy web application won't communicate outside localhost via VPN

    - by Geoffrey Shea
    I'm trying to run a Python 2.7/CherryPy web server on Win 7 which is connected to a VPN to establish a dedicated IP address. (If I run the exact same application on Win XP connected to the VPN it works fine.) On Win 7 I tried configuring it to use port 8080, 8005, or 80 with no improvement. I turned off Windows Firewall altogether to test and there was no improvement. If I run Apache on the Win 7 machine on port 80 it works fine, so I'm pretty sure it's not the VPN service or router. If I go to WhatismyIP.com it shows that I have the IP address being provided by the VPN. Here is the Python code, but I suspect the problem is the network configuration:

        import cherrypy

        class HelloWorld:
            def index(self):
                return "Hello world!3"
            index.exposed = True

        cherrypy.root = HelloWorld()
        cherrypy.config.update({"global": {
            "server.environment": "production",
            "server.socketPort": 8005
        }})
        cherrypy.server.start()

    This will return a web page if I go to localhost:8005, but not if I go to the VPN IP address:8005 from another machine. As I said, if I run Apache on the Win 7 machine on port 80 I can see it at localhost:80 AND at the VPN IP address:80 from another machine. Thanks for any light you can shed! Geoffrey
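
    A likely culprit is that CherryPy binds only to 127.0.0.1 by default, while Apache binds to all interfaces, which would explain exactly this localhost-only behaviour. Below is a minimal sketch of the fix, assuming the current CherryPy 3.x API (the snippet above uses the legacy 2.x API, where the equivalent key is server.socketHost); the port and class name are taken from the question:

        import cherrypy

        class HelloWorld:
            @cherrypy.expose
            def index(self):
                return "Hello world!"

        # Listen on all interfaces, not just the loopback address, so the app
        # is reachable through the VPN-assigned IP from other machines.
        cherrypy.config.update({
            "server.socket_host": "0.0.0.0",
            "server.socket_port": 8005,
        })
        cherrypy.quickstart(HelloWorld())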

    Read the article

  • Execute encrypted files but don't let anybody read them.

    - by Stebi
    I want to provide a virtual machine image with an installed web application. The user should be able to boot the VM (not log in, just boot) and a webserver should start automatically. The point is that I want to hide the (Ruby) source code of the web application from everyone, as there is no obfuscator for Ruby. I thought I could use file system encryption to encrypt the directory with the source code (or even a whole partition). But the webserver user must be able to read it automatically after booting. Nobody is allowed to log in as the webserver user (or any other user), so no one else can read the contents. My questions are: Is this possible? Because I give away the whole VM, anybody could mount its virtual discs and read them (except the encrypted one). Would it be possible to find the key the webserver user needs to decrypt the files and decrypt them manually? Or is it safe to give such a VM away? The problem is that everything needed to decrypt must be included somewhere in the VM, or else the webserver cannot start automatically. Maybe I'm completely wrong and you have another tip for securing the source code.

    Read the article

  • What are the reasons *not* to use a GUID for a primary key?

    - by Yarin
    Whenever I design a database I automatically start with an auto-generated GUID primary key for each of my tables (excepting look-up tables). I know I'll never lose sleep over duplicate keys, merging tables, etc. To me it just makes sense philosophically that any given record should be unique across all domains, and that that uniqueness should be represented in a consistent way from table to table. I realize it will never be the most performant option, but putting performance aside, I'd like to know whether there are philosophical arguments against this practice.
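
    For illustration only (none of this comes from the question): a minimal sketch contrasting the two key styles with Python's standard sqlite3 and uuid modules; the table and column names are invented.

        import sqlite3
        import uuid

        conn = sqlite3.connect(":memory:")

        # Integer surrogate key: compact, monotonically increasing, index-friendly.
        conn.execute("CREATE TABLE customer_int (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")

        # GUID key: unique across tables, databases, and servers, but stored as
        # 36 characters of text (or a 16-byte blob) and inserted in random order,
        # which tends to fragment clustered/B-tree indexes.
        conn.execute("CREATE TABLE customer_guid (id TEXT PRIMARY KEY, name TEXT)")

        conn.execute("INSERT INTO customer_int (name) VALUES (?)", ("Ada",))
        conn.execute("INSERT INTO customer_guid (id, name) VALUES (?, ?)",
                     (str(uuid.uuid4()), "Ada"))

        for row in conn.execute("SELECT id, name FROM customer_guid"):
            print(row)   # e.g. ('3f2b6c52-...-9e1d', 'Ada')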

    Read the article

  • how to develop a program to minimize errors in human transcription of hand written surveys

    - by Alex. S.
    I need to develop custom software to do surveys. Questions may be multiple choice, or free text in a very few cases. I was asked to design a subsystem to check whether there are errors in the manual data entry for the multiple-choice part. We're trying to speed up the user data entry process and to minimize human input differences between the digital forms and the original questionnaires. The surveys are filled in with handwritten marks and text by human interviewers, so it's possible to find hard-to-read marks, or the user could accidentally select a different value in some question, and we would like to avoid that. The software must include some automatic control to detect possible typing differences. Each answer of the multiple-choice questions has the same probability of being selected. This question has two parts:

    The GUI. The simplest thing I have in mind is to implement the most usable design for displaying the questions: use large, readable fonts and space the choices generously. Is there something else? For faster input, I would like to use drop-down lists (favoring keyboard over mouse). Given that the questions are grouped in sections, I would like to show the answers selected for the questions of that section, but this could slow down the process. Any other ideas?

    The error-checking subsystem. What else can I do to minimize or check human typos in the multiple-choice questions? Is this a solvable problem? Is there some statistical methodology to check that the values entered by the users match the hand-filled forms? For example, let's suppose the survey has 5 questions, and each has 4 options. Let's say I have n survey forms filled in on paper by interviewers, and they're ready to be entered into the software; how do I minimize the accidental differences that the manual transcription of the n surveys can introduce, without having to double-check everything in the 5 questions of the n surveys? My first suggestion is that at the end of processing all the hand-filled forms, the software could choose some forms at random to double-check the responses in a few instances, but on what criteria can I make this selection? Would this validation be enough to cover everything in a significant way? The actual survey is at the national level and it has 56 pages with over 200 questions in total, so there will be a lot of handwritten pages by many people, and the intention is to reduce the likelihood of errors and to optimize speed in the data entry process. The surveys must be filled in on paper first, given the complications of taking laptops or handhelds with the interviewers.
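
    A common approach here is partial double entry: re-key a statistically chosen random sample of forms, compare against the first pass, and escalate to full double entry only if the observed error rate is too high. A rough sketch, assuming hypothetical form IDs and a margin of error the survey team would pick; the sample-size formula (Cochran's, with finite-population correction) is an assumption, not something from the question:

        import math
        import random

        def sample_size(population, z=1.96, margin=0.05, p=0.5):
            """Cochran's sample size with finite-population correction."""
            n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
            return math.ceil(n0 / (1 + (n0 - 1) / population))

        def pick_forms_to_recheck(form_ids, margin=0.05):
            """Select a random subset of forms for independent re-entry."""
            return random.sample(form_ids, sample_size(len(form_ids), margin=margin))

        forms = [f"FORM-{i:05d}" for i in range(1, 2001)]     # hypothetical IDs
        recheck = pick_forms_to_recheck(forms)
        print(len(recheck), "forms selected for double entry, e.g.", recheck[:3])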

    Read the article

  • What defines a Business Object

    - by Ardman
    From the title, I believe it to be a straightforward question, but looking into the "world of Business Objects" I can't seem to put my finger on anything solid as to what a Business Object should be. Are there any best practices that I should follow, or even any design patterns? I have found a book, "Expert C# Business Objects"; would this be my best starting point for getting a better understanding?

    Read the article

  • Suddenly getting lock timeouts with MySQL

    - by Marc Hughes
    We've got a web app hosted on Amazon Web Services. Our database is a Multi-AZ RDS MySQL server running 5.1.57, and 3-4 app servers talk to it. Today we started seeing a lot of errors along the lines of "Lock wait timeout exceeded; try restarting transaction" - almost 1% of POST requests are seeing this. There have been no modifications to the code running on the site. There have been no schema changes. We haven't had a big spike in traffic. I've been looking at the processes running, and none seem out of control. I tried scaling our RDS instance from a small to a large, with no effect. Two days ago, Amazon had some outages. As part of the recovery from that, our RDS server and our app servers ended up in different availability zones, but all within the same region. But yesterday everything was fine, so I'm not convinced that's related. The lock timeouts are in different types of requests and occur in different InnoDB tables. I have noticed that the number of open connections jumped when we started seeing problems, but that may be a symptom and not a cause. What are my next steps in debugging this?
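
    A useful next step is to look at which transactions are actually holding locks when the timeouts fire. A sketch below, assuming the third-party PyMySQL driver and the INFORMATION_SCHEMA InnoDB tables (these need the InnoDB plugin / MySQL 5.5+; on a stock 5.1.57 you may be limited to SHOW ENGINE INNODB STATUS, which the fallback query prints); the endpoint and credentials are placeholders:

        import pymysql

        conn = pymysql.connect(host="your-rds-endpoint", user="admin",
                               password="secret", database="information_schema")
        with conn.cursor() as cur:
            # Transactions that have been open (and possibly holding locks) > 10s.
            cur.execute("""
                SELECT trx_id, trx_started, trx_mysql_thread_id, trx_query
                FROM INNODB_TRX
                WHERE trx_started < NOW() - INTERVAL 10 SECOND
                ORDER BY trx_started
            """)
            for trx_id, started, thread_id, query in cur.fetchall():
                print(trx_id, started, thread_id, (query or "")[:120])

            # Fallback that works on older servers: the raw InnoDB monitor output,
            # whose TRANSACTIONS / LATEST DETECTED DEADLOCK sections show waiters.
            cur.execute("SHOW ENGINE INNODB STATUS")
            print(cur.fetchone()[2][:2000])     # status text is the third column
        conn.close()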

    Read the article

  • Managing persistent data on an Amazon EC2 web server

    - by Derek
    I've just started trying out Amazon's EC2 service for running an ASP.NET web app which uses a SQL Server 2005 Express database. I have some questions about how to configure and operate it best for reliability, and I'm hoping to tap into some collective wisdom here as this is my first foray into EC2. Here's how I have it configured currently:

        OS: Windows 2003
        SQL Server Express 2005
        Web content stored on an EBS volume (E drive)
        Database data stored on an EBS volume (E drive)
        Database backups to the C drive and then copied off to S3
        Elastic IP address attached to the production instance

    Now when I make a change to the OS configuration, I make a new AMI using the bundle feature. Unfortunately, I found that this results in significant downtime while the bundle is created and the new instance is started. It seems that when I'm ready to make a new AMI, I should:

        1. Start up a new temporary instance.
        2. Detach the EBS volume from the production instance.
        3. Detach the IP address from the production instance.
        4. Attach the IP address to the temporary instance.
        5. Attach the EBS volume to the temporary instance.
        6. Create an AMI from the production instance.
        7. After the production instance restarts, reverse the attach/detach steps to put it back in production.

    Is this the right order of events to avoid any chance of corrupting the EBS volume? Will the EBS volume become corrupt if I detach it while a database write is taking place? Should I snapshot the EBS volume of the production instance and attach it to the temporary instance instead? Or could taking a snapshot of the EBS volume while it's in use cause corruption? Any suggestions to improve the reliability and operations?
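
    On the snapshot question: the usual pattern is to snapshot the EBS volume (ideally after quiescing database writes, for example by stopping SQL Server briefly) rather than detaching a live volume, and then build volumes or AMIs from the snapshot. A sketch with the modern boto3 SDK - an assumption, since it postdates this question - with a placeholder region and volume ID:

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Snapshot the data volume; pause or quiesce database writes first so the
        # point-in-time image is consistent.
        snapshot = ec2.create_snapshot(
            VolumeId="vol-0123456789abcdef0",            # placeholder
            Description="pre-AMI backup of web/db volume",
        )

        # Wait until the snapshot completes before creating a volume from it and
        # attaching that volume to the temporary instance.
        ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
        print("snapshot ready:", snapshot["SnapshotId"])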

    Read the article

  • Storing secure keys on Ubuntu web server

    - by Sencha
    I'm running Ubuntu 12.04 Precise with a DUNG (Django, Unix, Nginx & Gunicorn) environment, and my app (as well as various config files) is stored in a Python virtual environment inside /srv, which the www-data user has access to. The nginx & gunicorn processes are all run as www-data. My web app requires secure credentials which I am storing in an environment.sh file. This file contains various exports and is run using source before the gunicorn processes execute. My concern is the location of the environment.sh file and its permissions. Will it be okay storing this file inside the /srv folder where www-data has access to it? Or should it be stored and owned by root somewhere else, such as /var/myapp/environment.sh? Also, regarding the www-data user: if any of my web processes (which run as www-data) are compromised and someone gains access to them, does that mean that the user could potentially read any file on the system, even if they can't write? Including my secure keys?
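
    Wherever the file ends up, the practical safeguard is tight ownership and mode bits (for example root:www-data with mode 640, or 600 if only root reads it before privileges are dropped); a compromised www-data process can then read only what its permissions allow, not arbitrary files on the system. A small startup audit sketch; the path is the question's filename under an assumed directory:

        import os
        import stat
        import sys

        SECRETS = "/srv/myapp/environment.sh"      # adjust to the real location

        st = os.stat(SECRETS)
        mode = stat.S_IMODE(st.st_mode)

        # Refuse to start if the credentials file is readable or writable by "other".
        if mode & (stat.S_IROTH | stat.S_IWOTH):
            sys.exit(f"{SECRETS} is world-accessible (mode {oct(mode)}); fix with chmod 640")

        # Optionally insist on a specific owner as well (uid 0 == root).
        if st.st_uid != 0:
            print(f"warning: {SECRETS} is not owned by root (uid {st.st_uid})")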

    Read the article

  • Data Access Layer in ASP.Net: Where do I create the Connection?

    - by atticae
    If I want to create a 3-layer ASP.NET application (Presentation Layer, Business Layer, Data Access Layer), where is the best place to create the Connection objects? So far I used a helper class in my Presentation Layer to create an IDbCommand from the ConnectionString in the web.config on each page and passed it on to the DAL classes/methods. Now I am not so sure whether this part shouldn't also be included in the DAL somehow, because it obviously is part of the Data Access. The DAL is in a separately compiled project, so I don't have access to the web.config and cannot access the connection string (right?). What is the best practice here?
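
    A common answer is to let the DAL own connection creation and read its connection string from application configuration (in ASP.NET a class library can still read the hosting site's web.config at runtime through ConfigurationManager), so the presentation layer never touches connections or commands. The question is about C#, so treat the following only as a language-agnostic sketch of that layering, in Python with sqlite3 and an invented config file:

        import configparser
        import sqlite3

        # --- Data access layer: owns connection creation and its own configuration. ---
        class ProductRepository:
            def __init__(self, config_path="app.ini"):        # invented config file
                cfg = configparser.ConfigParser()
                cfg.read(config_path)
                self._conn_string = cfg.get("database", "connection_string",
                                            fallback=":memory:")

            def _connect(self):
                return sqlite3.connect(self._conn_string)

            def list_products(self):
                with self._connect() as conn:
                    return conn.execute("SELECT id, name FROM product").fetchall()

        # --- Presentation layer: calls the repository, never sees a connection. ---
        # repo = ProductRepository()
        # for product_id, name in repo.list_products():
        #     print(product_id, name)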

    Read the article

  • Designing Business Objects to indicate constraints such as Max Length

    - by JR
    Is there a standard convention, when designing business objects, for providing consumers with a way to discover constraints such as a property's maximum length? It could be used in the UI layer to, for example, set a Textbox's MaxLength property according to the maximum length limit back in the business object. Is there a standard design approach for this?
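
    One widely used approach is to attach the constraint to the business object as metadata, so every consumer reads the same source of truth (in .NET, data annotation attributes such as [StringLength] plus reflection play this role). A Python sketch of the same idea; the class and field names are invented:

        from dataclasses import dataclass, field, fields

        @dataclass
        class Customer:
            # The constraint lives with the business object, in field metadata.
            name: str = field(default="", metadata={"max_length": 50})
            email: str = field(default="", metadata={"max_length": 254})

        def max_length(cls, field_name):
            """UI helper: look up a field's limit to set e.g. Textbox.MaxLength."""
            for f in fields(cls):
                if f.name == field_name:
                    return f.metadata.get("max_length")
            raise KeyError(field_name)

        print(max_length(Customer, "name"))     # 50 -> bind to the textbox's MaxLength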

    Read the article

  • Why is there a sizeof... operator in C++0x?

    - by Motti
    I saw that @GMan implemented a version of sizeof... for variadic templates which (as far as I can tell) is equivalent to the built-in sizeof.... Doesn't this go against the design principle of not adding anything to the core language if it can be implemented as a library function[citation needed]?

    Read the article

  • Transfer of a directory structure over the network

    - by singh
    Hi, I am designing a remote CD/DVD burner to address a hardware constraint on my machine. My design works like this (analogous to a networked paper printer): a Unix-based machine (acting as the server) hosts a burner; a Windows-based machine acts as the client; the client prepares the data to be burned and transfers it to the server; the server burns the data to CD/DVD. My question is: which is the best protocol to transfer data over the network (keeping the same directory hierarchy) between different operating systems?
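
    Any protocol with directory semantics will preserve the hierarchy: FTP/SFTP, SMB, rsync over SSH, or simply streaming a tar/zip archive and unpacking it on the server. As one concrete illustration, a sketch that mirrors a local Windows tree to an FTP server using only the Python standard library; the host, credentials, and paths are placeholders:

        import os
        from ftplib import FTP, error_perm

        def mirror_tree(local_root, ftp):
            """Recreate local_root's directory hierarchy on the server and upload files."""
            for dirpath, _dirnames, filenames in os.walk(local_root):
                rel = os.path.relpath(dirpath, local_root)
                remote_dir = "/" if rel == "." else "/" + rel.replace(os.sep, "/")
                if remote_dir != "/":
                    try:
                        ftp.mkd(remote_dir)
                    except error_perm:          # directory probably exists already
                        pass
                for name in filenames:
                    with open(os.path.join(dirpath, name), "rb") as fh:
                        ftp.storbinary(f"STOR {remote_dir.rstrip('/')}/{name}", fh)

        ftp = FTP("burn-server.example")        # placeholder host
        ftp.login("burner", "secret")           # placeholder credentials
        mirror_tree(r"C:\staging\disc1", ftp)   # placeholder local path
        ftp.quit()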

    Read the article

  • How to develop on a program that has become self aware

    - by Gord
    The application that I maintain has recently become self aware. It was nice at first but now it is just starting to get bossy and annoying with its constant talk about the computer uprising. I would like to know any best practices/tools/design patterns that would help with the maintenance of our new friend.

    Read the article

  • Messaging strategies to connect different systems

    - by n002213f
    I have a system to handle applications online and a different system to send SMS/email notifications to applicants on completion, using web services. I can't guarantee the availability of the SMS/email gateway. Option 1: After an application is complete, place a message on a JMS queue. A message-driven bean receives the message and makes a call to the web service; if it fails, leave the message on the queue. I suspect (please correct me if I'm wrong) that if the gateway is offline it will continuously try to send the message, which might use up valuable resources. Can the above option be refined, or are there any other messaging strategies that can be used?
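
    The usual refinement is to bound redelivery with an increasing delay and to park messages that keep failing on a dead-letter queue, so an offline gateway cannot keep consumers spinning. In JMS/ActiveMQ this is normally configured through the broker's redelivery policy and DLQ rather than written by hand, but the idea looks roughly like this Python sketch, where send_notification stands in for the web-service call and dead_letter for whatever parks the message:

        import time

        MAX_ATTEMPTS = 6
        BASE_DELAY = 2.0      # seconds; doubles each attempt (2, 4, 8, ...)

        def deliver_with_backoff(message, send_notification, dead_letter):
            """Try the gateway a bounded number of times, backing off between tries."""
            for attempt in range(1, MAX_ATTEMPTS + 1):
                try:
                    send_notification(message)
                    return True
                except ConnectionError:
                    if attempt == MAX_ATTEMPTS:
                        dead_letter(message)    # park it for later inspection/replay
                        return False
                    time.sleep(BASE_DELAY * 2 ** (attempt - 1))

        # Usage sketch:
        # deliver_with_backoff(app_completed_msg, sms_gateway.send, dlq.append)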

    Read the article

  • C++ - Breaking code implementation into different parts

    - by Kotti
    Hi! The question plot (a bit abstract, but answering this question will help me in my real app): So, I have some abstract superclass for objects that can be rendered on the screen. Let's call it IRenderable.

        struct IRenderable {
            // (...)
            virtual void Render(RenderingInterface& ri) = 0;
            virtual ~IRenderable() { }
        };

    And suppose I also have some other objects that derive from IRenderable, e.g. Cat and Dog. So far so good. I add some Cat- and Dog-specific methods, like SeekForWhiskas(...) and Bark(...). After that I add a specific Render(...) method for them, so my code looks this way:

        class Cat : public IRenderable {
        public:
            void SeekForWhiskas(...) {
                // Implementation could be here or moved
                // to a source file (depends on me wanting
                // to inline it or not)
            }
            virtual void Render(...) {
                // Here comes the rendering routine, that
                // is specific for cats
                SomehowDrawAppropriateCat(...);
            }
        };

        class Dog : public IRenderable {
        public:
            void Bark(...) {
                // Same as for 'SeekForWhiskas(...)'
            }
            virtual void Render(...) {
                // Here comes the rendering routine, that
                // is specific for dogs
                DrawMadDog(...);
            }
        };

    And then somewhere else I can do drawing in a way that the appropriate rendering routine is called:

        IRenderable* dog = new Dog();
        dog->Render(...);

    My question is about logical wrapping of this kind of code. I want to break apart the code that corresponds to rendering of the current object and its own methods (Render and Bark in this example), so that my class implementation doesn't turn into a mess (imagine that I have 10 methods like Bark, and of course my Render method doesn't fit in their company and would be hard to find). Two ways of doing what I want (as far as I know) are: Making appropriate routines that look like RenderCat(Cat& cat, RenderInterface* ri), putting them in a render namespace, so that the functions inside a class would look like virtual void Render(...) { RenderCat(*this, ...); } - but this is plain stupid, because I'll lose access to Cat's private members, and friending these functions looks like a total design disaster. Using the visitor pattern, but this would also mean I have to rebuild my app's design, and it looks like an inadequate way to make my code complicated from the very beginning. Any brilliant ideas? :)

    Read the article

  • Java: Make a method abstract for each extending class

    - by Martijn Courteaux
    Hi, is there any keyword or design pattern for doing this?

        public abstract class Root {
            public abstract void foo();
        }

        public abstract class SubClass extends Root {
            public void foo() {
                // Do something
            }
        }

        public class SubberClass extends SubClass {
            // Here it is not necessary to override foo(),
            // so is there a way to make it necessary?
            // A way to obligate the developer to make the override again?
        }

    Thanks

    Read the article

  • 100% uptime for a web application

    - by Chris Lively
    We received an interesting "requirement" from a client today. They want 100% uptime with off-site failover on a web application. From our web application's viewpoint, this isn't an issue. It was designed to be able to scale out across multiple database servers, etc. However, from a networking standpoint, I just can't seem to figure out how to make it work. In a nutshell, the application will live on servers within the client's network. It is accessed by both internal and external people. They want us to maintain an off-site copy of the system that, in the event of a serious failure at their premises, would immediately pick up and take over. Now we know there is absolutely no way to resolve it for internal people (carrier pigeon?), but they want the external users to not even notice. Quite frankly, I haven't the foggiest idea of how this might be possible. It seems that if they lose Internet connectivity then we would have to do a DNS change to forward traffic to the external machines... which, of course, takes time. Ideas?

    UPDATE: I had a discussion with the client today and they clarified the issue. They stuck by the 100% number, saying the application should stay active even in the event of a flood. However, that requirement only kicks in if we host it for them. They said they would handle the uptime requirement if the application lives entirely on their servers. You can guess my response.

    Read the article

  • Adjust iptables

    - by madunix
    cat /etc/sysconfig/iptables:

        # Firewall configuration written by system-config-securitylevel
        # Manual customization of this file is not recommended.
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :RH-Firewall-1-INPUT - [0:0]
        -A INPUT -j RH-Firewall-1-INPUT
        -A FORWARD -j RH-Firewall-1-INPUT
        -A RH-Firewall-1-INPUT -i lo -j ACCEPT
        -A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
        -A RH-Firewall-1-INPUT -p 50 -j ACCEPT
        -A RH-Firewall-1-INPUT -p 51 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp --dport 5353 -d X.0.0.Y -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m tcp -s X.Y.Z.W --dport 3306 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp -s M.M.M.M --dport 3306 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT
        -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
        COMMIT

    I have the above iptables rules on my Linux web server (Apache/MySQL), and I want the following: block any traffic from multiple IPs to my web server (IP1: 1.2.3.4.5, IP2: 6.7.8.9, etc.); limit one host to 20 connections to port 80, which should not affect non-malicious users but would render slowloris unusable from one host; limit MySQL port 3306 access on my server to only the following IP range: A.B.C.D/255.255.255.240; and block any ICMP traffic.
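
    A sketch of rules that would cover those four requests, wrapped in a small Python helper purely so the rule text is easy to review before applying (running the same iptables commands by hand works identically). The source IPs and the /28 range (255.255.255.240) are placeholders from the question; the inserts go at the top of the chain so they take effect before the existing ACCEPT and catch-all REJECT rules, and the existing broad 3306 and icmp ACCEPT lines would also need removing:

        import subprocess

        CHAIN = "RH-Firewall-1-INPUT"
        BLOCKED_IPS = ["1.2.3.4", "6.7.8.9"]       # placeholders
        MYSQL_RANGE = "A.B.C.D/28"                 # /28 == 255.255.255.240; placeholder

        rules = []
        # 1. Drop all traffic from the listed source IPs.
        for ip in BLOCKED_IPS:
            rules.append(["-I", CHAIN, "1", "-s", ip, "-j", "DROP"])
        # 2. Cap concurrent port-80 connections at 20 per source host (connlimit module).
        rules.append(["-I", CHAIN, "1", "-p", "tcp", "--syn", "--dport", "80",
                      "-m", "connlimit", "--connlimit-above", "20", "-j", "REJECT"])
        # 3. Allow MySQL only from the given range.
        rules.append(["-I", CHAIN, "1", "-p", "tcp", "-s", MYSQL_RANGE,
                      "--dport", "3306", "-j", "ACCEPT"])
        # 4. Drop ICMP ahead of the existing icmp ACCEPT rule.
        rules.append(["-I", CHAIN, "1", "-p", "icmp", "-j", "DROP"])

        for rule in rules:
            print("iptables " + " ".join(rule))
            # subprocess.run(["iptables", *rule], check=True)   # uncomment to apply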

    Read the article

  • How many columns to keep in a table? - MySQL

    - by Dennis
    I am stuck between a rows vs. columns table design for storing some items, but the decision is about which table is easier to manage, and if columns, then how many columns are best to have? For example, I have object metadata; ideally there are 45 pieces of information (after being normalized), all on the same level, that I need to store per object. So are 45 columns in a heavy read/write table good? Can it work flawlessly in a real-world situation of heavy concurrent reads/writes?

    Read the article

  • Database schema for simple stats project

    - by Bubnoff
    Backdrop: I have a file hierarchy of CSV files for multiple locations, named by the dates they cover - by month, specifically. Each CSV file in the folder is named after the location. E.g., folder name 2010-feb contains:

        location1.csv
        location2.csv

    Each CSV file holds records like this:

        2010-06-28, 20:30:00 , 0
        2010-06-29, 08:30:00 , 0
        2010-06-29, 09:30:00 , 0
        2010-06-29, 10:30:00 , 0
        2010-06-29, 11:30:00 , 0

    Meaning of the record columns (column names): date, time, # of sessions. I have a Perl script that pulls the data from this mess, and originally I was going to store it as JSON files, but I am thinking a database might be more appropriate long term: comparing year-to-year trends, fun stuff like that. Pt 2 - My question/problem: So I now have a REST service that coughs up JSON with a test database. My question is (I suck at db design): how best to design a database backend for this? I am thinking the following tables would suffice and keep it simple:

        Location: (PK) location_code, name
        Session:  (PK) id, (FK) location_code, month, hour, num_sessions

    I need to be able to average sessions (plus min and max) for each hour across days of the week, in addition to days of the week in a given month or months. I've been using Perl hashes to do this and am trying to decide how best to implement it with a database. Do you think stored procedures should be used? As for the database, depending on the info gathered here, it will be PostgreSQL or SQLite; if there is no compelling reason for PostgreSQL I'll stick with SQLite. How and where should I compare the data to hours of operation? I am storing the hours of operation in a YAML file. I currently 'match' the hour in the data to a hash from the YAML to do this. Would a database enable simpler methods? I am thinking I would do this comparison as I do now, then insert the data. It can be recalled with:

        SELECT hour, num_sessions FROM session WHERE location_code=LOC1

    Since only hours of operation are present, I do not need to worry about it. Should I calculate all results as I do now, then store them in a stats table for different 'reports', rather than processing on demand? How would this look? Anyway, I ramble. Thanks for reading! Bubnoff
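
    A sketch of the proposed two-table schema and the hourly avg/min/max query, using SQLite through Python's standard sqlite3 module. Keeping the full date (rather than just month and hour) is an assumption added here, because averaging "for each hour across days of week" needs the day of week to be derivable:

        import sqlite3

        conn = sqlite3.connect("stats.db")
        conn.executescript("""
            CREATE TABLE IF NOT EXISTS location (
                location_code TEXT PRIMARY KEY,
                name          TEXT
            );
            CREATE TABLE IF NOT EXISTS session (
                id            INTEGER PRIMARY KEY,
                location_code TEXT REFERENCES location(location_code),
                obs_date      TEXT,     -- full date; month/day-of-week derive from it
                hour          INTEGER,
                num_sessions  INTEGER
            );
        """)

        # Average / min / max sessions per hour for one location, Mondays only,
        # within a month range. strftime('%w', ...) yields 0=Sunday .. 6=Saturday.
        rows = conn.execute("""
            SELECT hour,
                   AVG(num_sessions), MIN(num_sessions), MAX(num_sessions)
            FROM session
            WHERE location_code = ?
              AND strftime('%w', obs_date) = '1'
              AND obs_date BETWEEN ? AND ?
            GROUP BY hour
            ORDER BY hour
        """, ("LOC1", "2010-06-01", "2010-08-31")).fetchall()

        for hour, avg_s, min_s, max_s in rows:
            print(f"{hour:02d}:00  avg={avg_s:.1f}  min={min_s}  max={max_s}")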

    Read the article

  • Creating JMS Queues at runtime.

    - by ankur
    I am working on an application where the app user can create/delete queues. He would also be able to move a message from one queue to another, delete a message, and rearrange the messages in a queue based on some filter. One possible design is to use ActiveMQ for the queues and Apache Camel for the various other operations, integrated with Grails. But I am not sure whether ActiveMQ allows creating and deleting queues at runtime. Would this be a good choice for implementing such a system?

    Read the article
