Hi,
I can't figure out how to design the classes in my system.
In ClassA I create a selenium object (it simulates user actions on the website).
In this ClassA I also create other objects, like SearchScreen, PaymentScreen and SummaryScreen.
# -*- coding: utf-8 -*-
from selenium import selenium
import unittest, time, re

class OurSiteTestCases(unittest.TestCase):

    def setUp(self):
        self.verificationErrors = []
        self.selenium = selenium("localhost", 5555, "*chrome", "http://www.someaddress.com/")
        time.sleep(5)
        self.selenium.start()

    def test_buy_coffee(self):
        sel = self.selenium
        sel.open('/')
        sel.window_maximize()
        search_screen = SearchScreen(self.selenium)
        search_screen.choose('lavazza')
        payment_screen = PaymentScreen(self.selenium)
        payment_screen.fill_test_data()
        summary_screen = SummaryScreen(self.selenium)  # note: self.selenium, not the selenium module
        summary_screen.accept()

    def tearDown(self):
        self.selenium.stop()
        self.assertEqual([], self.verificationErrors)

if __name__ == "__main__":
    unittest.main()
Here's an example SearchScreen module:
class SearchScreen:

    def __init__(self, selenium):
        self.selenium = selenium

    def search(self):
        self.selenium.click('css=button.search')
Is there anything wrong with the design of those classes?
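For comparison, one common refinement of this design is a small base class so every screen object shares the selenium handle instead of each screen repeating the constructor. A minimal sketch, assuming hypothetical locators and screen methods (the real ones depend on your site's markup):

```python
class Screen(object):
    """Base page object: holds the shared selenium client."""
    def __init__(self, selenium):
        self.selenium = selenium

class SearchScreen(Screen):
    def choose(self, product):
        # Hypothetical locators -- substitute the site's real ones.
        self.selenium.type('css=input.search-box', product)
        self.selenium.click('css=button.search')

class PaymentScreen(Screen):
    def fill_test_data(self):
        # Hypothetical test card number and locator.
        self.selenium.type('css=input.card-number', '4111111111111111')
```

The test case then only decides *what* to do (choose, pay, accept), while each screen class knows *how* its page is driven; this is essentially the Page Object pattern.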
Most websites have content, but many websites are transactional. Is there an RSS-like scheme through which a website can syndicate the "actions/transactions" it provides?
I'm building a website for a student organization I am involved in at my college. Most of the site will be static - i.e. it won't change from year to year - but certain pieces will. I am high-tech, but most of the others aren't, and I am graduating in the spring. So how should I go about building the website so as to allow those who take over in subsequent years to edit information?
Examples:
Events: I already plan on using a Google calendar for this
Officers: There will be profiles/pictures for all the officers on the web page
Connections: Partnerships with other organizations that we have currently, but may not in future, or may add more in future
Should I use some form of CMS (Content Management System)? If so, how restrictive are they (e.g. Drupal) about what you can build, and how easy are they to edit afterwards? What other ways could I make a very nice-looking website but allow certain pieces to be edited later?
Hi,
I'm currently reading Head First Object-Oriented Analysis and Design. The book states that to write great software (i.e. software that is well-designed, well-coded, and easy to maintain, reuse, and extend) you need to do three things:
Firstly, make sure the software does everything the customer wants it to do
Once step 1 is completed, apply Object Oriented principles and techniques to eliminate any duplicate code that might have slipped in
Once steps 1 and 2 are complete, then apply design patterns to make sure the software is maintainable and reusable for years to come.
My question is: do you follow these steps when developing great software? If not, what steps do you usually follow in order to ensure it's well-designed, well-coded, and easy to maintain, reuse, and extend?
I'm working on an academic project which simulates a rather large queuing procedure in Java. The core of the simulator rests within one package containing 8 classes, each implementing a single concept. Every class in the project follows the SRP. These classes encapsulate the behavior of the simulator and inter-connect every other class in the project.
The problem that has arisen is that most of these 8 classes are, logically I think, tightly coupled: each one has to have working knowledge of every other class in the package in order to call its methods when needed. The application needs only one instance of each class, so it might be better to create static fields for each class in a new class and use that to make calls, instead of keeping a reference in each class to every other class in the package (which I'm fairly sure is incorrect). But is this considered a correct design solution? Or is there a design pattern that better suits my needs?
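One pattern that often fits this situation is the Mediator: instead of each of the 8 classes holding references to the other 7, every class holds a single reference to a central mediator and communicates through it. A minimal sketch (written in Python for brevity, though the structure maps directly onto Java; the Clock and Stats classes are hypothetical stand-ins for your simulator components):

```python
class Mediator(object):
    """Central hub: components publish events to it instead of
    calling each other directly."""
    def __init__(self):
        self._handlers = {}

    def register(self, event, handler):
        # Subscribe a callable to a named event.
        self._handlers.setdefault(event, []).append(handler)

    def notify(self, event, *args):
        # Fan an event out to every subscriber.
        for handler in self._handlers.get(event, []):
            handler(*args)

class Clock(object):
    """Hypothetical component: publishes ticks rather than calling
    Queue/Server/Stats objects directly."""
    def __init__(self, mediator):
        self.mediator = mediator

    def tick(self, time):
        self.mediator.notify('tick', time)

class Stats(object):
    """Hypothetical component: subscribes only to events it cares about."""
    def __init__(self, mediator):
        self.ticks = []
        mediator.register('tick', self.ticks.append)
```

Each class now depends only on the mediator, so the coupling drops from n-to-n to n-to-1, and static fields become unnecessary because the object that wires everything up owns the single instances.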
I am looking for someone to make two website templates for my site, for free.
Here is a quick design of what I want (took me 2 minutes in Paint):
http://i50.tinypic.com/33p9aut.jpg and http://i50.tinypic.com/2qmogoo.jpg
Email me at [email protected] or [email protected] for more information
I have the following setup in my project:
public class WebApiApplication : System.Web.HttpApplication
{
    public static ISessionFactory SessionFactory { get; private set; }

    public WebApiApplication()
    {
        this.BeginRequest += delegate
        {
            var session = SessionFactory.OpenSession();
            CurrentSessionContext.Bind(session);
        };
        this.EndRequest += delegate
        {
            var session = SessionFactory.GetCurrentSession();
            if (session == null)
            {
                return;
            }
            session = CurrentSessionContext.Unbind(SessionFactory);
            session.Dispose();
        };
    }

    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        RouteConfig.RegisterRoutes(RouteTable.Routes);
        BundleConfig.RegisterBundles(BundleTable.Bundles);
        var assembly = Assembly.GetCallingAssembly();
        SessionFactory = new NHibernateHelper(assembly, Server.MapPath("/")).SessionFactory;
    }
}

public class PositionsController : ApiController
{
    private readonly ISession session;

    public PositionsController()
    {
        this.session = WebApiApplication.SessionFactory.GetCurrentSession();
    }

    public IEnumerable<Position> Get()
    {
        var result = this.session.Query<Position>().Cacheable().ToList();
        if (!result.Any())
        {
            throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound));
        }
        return result;
    }

    public HttpResponseMessage Post(PositionDataTransfer dto)
    {
        //TODO: Map dto to model
        IEnumerable<Position> positions = null;
        using (var transaction = this.session.BeginTransaction())
        {
            this.session.SaveOrUpdate(positions);
            try
            {
                transaction.Commit();
            }
            catch (StaleObjectStateException)
            {
                if (transaction != null && transaction.IsActive)
                {
                    transaction.Rollback();
                }
            }
        }
        var response = this.Request.CreateResponse(HttpStatusCode.Created, dto);
        response.Headers.Location = new Uri(this.Request.RequestUri.AbsoluteUri + "/" + dto.Name);
        return response;
    }

    public void Put(int id, string value)
    {
        //TODO: Implement PUT
        throw new NotImplementedException();
    }

    public void Delete(int id)
    {
        //TODO: Implement DELETE
        throw new NotImplementedException();
    }
}
I am not sure if this is the recommended way to get the session into the controller.
I was thinking about using DI, but I am not sure how to inject the session that is opened and bound in the BeginRequest delegate into the controller's constructor, to get this:
public PositionsController(ISession session)
{
    this.session = session;
}
Question: what is the recommended way to use NHibernate sessions in ASP.NET MVC / Web API?
Hi,
I've been given a problem to fix, and I initially thought of .htaccess files, except for one thing: I quickly realized it's an IIS server. Is it possible to give a webmaster the ability to modify virtual directories using web.config files, the same way you can with .htaccess files? If so, any ideas on where I can find details on how this is done that I can pass along to the end client? We want to be able to do this without giving the webmaster access to the IIS console.
An example of the desired change is:
http://FQDN/Careers/Careers.aspx?locale=en-ca&uid=Careers
We'd like http://FQDN/careers to point to the above, with the mapping modified/added/removed by the end user via web.config.
I've been working with Windows Azure and Amazon Web Services EC2 for a good many months now (almost getting to the years range) and I've seen something over and over that seems troubling.
With AWS & Linux I commonly get instance startup times with EC2 around the 1-3 minute range.
With AWS & Windows OS on an EC2 instance it often takes 10-20 minutes.
With a Windows Azure Web or Service Role, I often wait anywhere from 6-30 minutes for the role to start up. I assume this involves booting up a Windows instance somewhere in the fabric.
I know there has always been tons of FUD about Windows vs. Linux, but I'd really like to know why Windows Server 2008 or 2003 boots so much slower in the cloud than Linux. Any specific technical information about this would be greatly appreciated! Thanks.
I've been using the Picasa "sync to web" feature but recently noticed that several folders, with a lot of synced photos and videos inside them, lost their synced status as soon as I moved them to another location on the disk (not through the Picasa "move folder" command).
These folders now still appear with the green arrow indicating that their contents were uploaded, but they lost the blue sync icon they previously had (and are no longer syncing...).
If I try to reactivate the "sync to web" option for these folders, Picasa starts re-uploading ALL of their contents. This is absurd, and would take ages to complete.
Is there any way I can somehow get Picasa to recognize these moved folders as the counterpart folder of an existing online folder for sync purposes?
To serve the static content of a directory over HTTP, one can simply navigate to that directory and type:
python -m SimpleHTTPServer 11111
which will start an HTTP server on port 11111.
This hack is nice because it requires zero-config: no stand-alone web server, no config files at all.
Is it possible to extend this example, or is there an alternate way to achieve this goal, that also has CGI support?
The final goal is to have a quick and lazy way of serving a web site from a certain directory. The site has static content (HTML pages, images), but also a CGI script. The CGI script must work properly when accessed via browser.
Of course I could set up a virtual host in Apache, allow CGI inside it, etc. But that's not a zero-config approach.
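For what it's worth, the standard library ships a CGI-aware sibling of SimpleHTTPServer: on Python 2, `python -m CGIHTTPServer 11111` serves the current directory and executes anything under ./cgi-bin as a CGI script, the same zero-config trick. A programmatic sketch in the Python 3 spelling (the handler lives in http.server; note it was deprecated and removed in very recent Python versions, so this is a convenience for quick-and-lazy serving, not production):

```python
from http.server import HTTPServer, CGIHTTPRequestHandler

class Handler(CGIHTTPRequestHandler):
    # Requests under these paths are executed as CGI scripts;
    # everything else is served as static content.
    cgi_directories = ['/cgi-bin']

def make_server(port=11111):
    return HTTPServer(('', port), Handler)

if __name__ == '__main__':
    make_server().serve_forever()
```

Scripts must be executable and emit a Content-Type header followed by a blank line, as usual for CGI.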
I'm using IIS6 Manager to setup the SMTP service on Windows Server 2008 Web Edition.
There seems to be a conflict (port 25?), which means that I cannot start and stop the Default SMTP server from within IIS6 Manager. I can start and stop it with the services.msc snap-in, and this is reflected in the state of the SMTP server in IIS6 Manager.
I'm worried that none of the settings I want to get at within IIS6 (logging, authentication, etc.) are having any effect. None of these settings are available within IIS7 on Web Edition.
I have an SSH+Samba server so people can access its files from anywhere on the network.
I thought it would be also interesting to provide access through a web interface, so they can access the files even when they don't have access to the VPN or a Samba/SSH client. Something like the Ubuntu One or Dropbox web interface.
The http server could be on the same machine as the SSH+Samba, so it should just provide access to local files and some way to login with their username/password.
Does anyone know of any software like this?
Using Google Chrome and Mozilla Firefox browsers on Ubuntu 9.10, I am unable to get any sound out from Java (version 6 update 15) on Runescape or WebSDR.
I'm only interested in getting WebSDR working and Runescape was the only other web applet I knew would have sound.
Sound does work in a test applet I downloaded when run from the command line, so it seems to be a web-specific issue.
Anyone else encountered or solved this or a similar issue? Are there any better applets out there that I can use to test my sound?
Hello,
I have something of a difficult situation: our company has a webserver in a remote data center that's, at the moment, only accessible by SSH, and the firewall is not easily modifiable because the techs at the data center have been unreliable and unreachable lately (not my choice of data center, and switching is not an option at the moment). Are there any browsers or plugins out there that will let me browse over an SSH connection? I can browse with links and lynx on the SSH command line, but that doesn't give me various functionality I need, and it's too hard to find things in the web application (running on a Tomcat server on the box) that I need access to. Does anybody have any suggestions? We're already working on getting direct access to the web application by having the firewall opened up, but I need something better in the meantime.
Hi, what are the main guidelines for setting up a user account on a Linux machine for a web app?
In my case it is a Rails application that does file management.
The first thing I can think of is to limit access rights to only the directories it needs. But how exactly should I go about this: set up rights through a user group, or through the user's ownership of those directories? I have very little experience in user rights management.
What else do I need to consider? I've heard of ACLs and SELinux; do I need to look into either of these to guarantee decent security for my simple web app?
Any advice about this, or anything I haven't mentioned, is welcome. Thanks, Max.
I will be using Ubuntu.
Hi,
I use this great web-clipping and note taking app called Evernote on my Windows machine.
However, there's no Linux version of Evernote (doesn't work properly in Wine).
I would like to get some suggestions for something with similar capabilities that runs on Linux/Ubuntu.
Specifically, I need to be able to select parts of a web page in Firefox and press some key combination to save that clip to disk, in some sort of searchable database.
The clip needs to retain pictures and basic text formatting; anything extra is unnecessary.
I also need to be able to create an empty note or edit an existing one.
Storing the notes on a local machine only is fine - I don't need the sync features of Evernote.
My boss quite regularly has to demo our web application to clients with no wifi available and only sketchy 3G access, and quite often the 3G lets him down.
I have considered setting a copy of our server up in a virtual machine on his laptop so he could demo it offline, but I fear this will just introduce more headaches when he forgets how to boot the VM up.
What I'd ideally like is an app that records you logging into a web app, saves copies of all the pages, and ties the links and buttons you click to offline copies of the pages it saved. So you could run through the demonstration you're going to give and have it cache the pages; when you then click the same buttons and links in offline mode, it presents the relevant offline pages.
Does such a thing exist? Can anyone recommend any alternative solutions to this problem?
Thanks,
Anthony
I want to use Wget to save single web pages (not recursively, not whole sites) for reference. Much like Firefox's "Web Page, complete".
My first problem is: I can't get Wget to save background images specified in the CSS. Even if it did save the background image files I don't think --convert-links would convert the background-image URLs in the CSS file to point to the locally saved background images.
Firefox has the same problem.
My second problem is: if there are images on the page I want to save that are hosted on another server (like ads), these won't be included. --span-hosts doesn't seem to solve that problem with the command below.
I'm using:
wget --no-parent --timestamping --convert-links --page-requisites --no-directories --no-host-directories -erobots=off http://domain.tld/webpage.html