Search Results

Search found 17770 results on 711 pages for 'repository design'.


  • Design question for WinForms (C#) app, using Entity Framework

    - by cdotlister
    I am planning on writing a small home budget application for myself, as a learning exercise. I have built my database (SQL Server) and written a small console application to interact with it and test out scenarios on my database. Once I am happy, my next step would be to start building the application - but I am already wondering what the best/standard design would be. I am planning on using Entity Framework for handling my database entities, then LINQ to SQL/Objects for getting the data, all running under a WinForms (for now) application.

    My plan (I've never used EF, and most of my development background is web apps) is to have Entity Framework in its own project, which holds the connection to the database. This project would expose methods such as 'GetAccount()', 'GetAccount(int accountId)', etc. I'd then have a service project that references my EF project, and on top of that my GUI project, which makes the calls to my service project.

    But I am stuck. Let's say I have a screen that displays a list of account types (Debit, Credit, Loan...). Once I have selected one, the next drop down shows a list of accounts I have that suit that account type. So the OnChange event on my account-type DropDown will call 'GetAccountTypes()' on the service layer project, and I would expect back a List<AccountType>. However, what is that AccountType object? It can't be the AccountType from my EF project, as my GUI project has no reference to it. Would I have to have some sort of shared library, referenced by both my GUI and my service project, with a custom-built AccountType object? The GUI could then expect back a list of these. So my service layer would have a method:

        public List<AccountType> GetAccountTypes()

    That would then call a custom method in my EF project - probably the same as the above, except that it returns a list of EF.Data.AccountType (the Entity Framework generated AccountType object) and contains the LINQ code to get the data as I want it. My service layer would then take that result and transform it into my custom AccountType object before returning it to the GUI. Does that sound at all like a good plan?
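
    A minimal sketch of the shared-DTO shape described above (AccountTypeDto, AccountService and the loader delegate are illustrative assumptions, not names from the question's project):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Shared library, referenced by both the service and GUI projects;
        // the GUI only ever sees this type, never the EF entity.
        public class AccountTypeDto
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Stand-in for the generated EF.Data.AccountType entity.
        public class AccountTypeEntity
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Service project: references the EF project, maps entities to DTOs.
        public class AccountService
        {
            private readonly Func<IEnumerable<AccountTypeEntity>> _loadAccountTypes;

            public AccountService(Func<IEnumerable<AccountTypeEntity>> loadAccountTypes)
            {
                _loadAccountTypes = loadAccountTypes; // e.g. the EF project's query method
            }

            public List<AccountTypeDto> GetAccountTypes()
            {
                return _loadAccountTypes()
                    .Select(e => new AccountTypeDto { Id = e.Id, Name = e.Name })
                    .ToList();
            }
        }

    The point of the mapping step is that the EF entity never crosses the service boundary, so the GUI project needs no reference to the EF project.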

    Read the article

  • Design pattern question: encapsulation or inheritance

    - by Matt
    Hey all, I have a question I have been toiling over for quite a while. I am building a templating engine with two main classes, Template.php and Tag.php, plus a bunch of extension classes like Img.php and String.php. The program works like this: a Template object creates Tag objects. Each Tag object determines which extension class (Img, String, etc.) to implement. The point of the Tag class is to provide helper functions for each extension class, such as wrap('div'), addClass('slideshow'), etc. Each Img or String class renders code specific to what is required, so $Img->render() would give something like <img src='blah.jpg' />.

    My question is: should I encapsulate all extension functionality within the Tag object, like so:

        // Tag.php
        function __construct($namespace, $args) {
            // Sort out namespace to determine which extension to call
            $this->extension = new $namespace($this); // Pass in Tag object so it can be used within extension
            return $this; // Tag object
        }
        function render() {
            return $this->extension->render();
        }

        // Img.php
        function __construct(Tag $T) {
            $args = $T->getArgs();
            $T->addClass('img');
        }
        function render() {
            return '<img src="blah.jpg" />';
        }

        // Usage:
        $T = new Tag("img", array(...));
        $T->render();

    ...or should I create more of an inheritance structure, because "Img is a Tag":

        // Tag.php
        public static function create($namespace, $args) {
            // Sort out namespace to determine which extension to call
            return new $namespace($args);
        }

        // Img.php
        class Img extends Tag {
            function __construct($args) {
                // Determine namespace then call create tag
                $T = parent::__construct($namespace, $args);
            }
            function render() {
                return '<img src="blah.jpg" />';
            }
        }

        // Usage:
        $Img = Tag::create('img', array(...));
        $Img->render();

    One thing I do need is a common interface for creating custom tags: whether I instantiate Img(...) or String(...), I need to instantiate each extension through Tag. I know this is a somewhat vague question; I'm hoping some of you have dealt with this in the past and can foresee certain issues with each design. If you have any other suggestions I would love to hear them. Thanks! Matt Mueller

    Read the article

  • Seeding repository Rhino Mocks

    - by ahsteele
    I am embarking upon my first journey of test-driven development in C#. To get started I'm using MSTest and Rhino.Mocks. I am attempting to write my first unit tests against my ICustomerRepository. It seems tedious to new up a Customer for each test method. In Ruby on Rails I'd create a seed file and load the customer for each test. It seems logical that I could put this boilerplate Customer into a property of the test class, but then I would run the risk of it being modified. What are my options for simplifying this code?

        [TestClass]
        public class CustomerTests : TestClassBase
        {
            [TestMethod]
            public void CanGetCustomerById()
            {
                // arrange
                var customer = new Customer()
                {
                    CustId = 5,
                    DifId = "55",
                    CustLookupName = "The Dude",
                    LoginList = new[] { new Login { LoginCustId = 5, LoginName = "tdude" } }
                };
                var repository = Stub<ICustomerRepository>();
                // act
                repository.Stub(rep => rep.GetById(5)).Return(customer);
                // assert
                Assert.AreEqual(customer, repository.GetById(5));
            }

            [TestMethod]
            public void CanGetCustomerByDifId()
            {
                // arrange
                var customer = new Customer()
                {
                    CustId = 5,
                    DifId = "55",
                    CustLookupName = "The Dude",
                    LoginList = new[] { new Login { LoginCustId = 5, LoginName = "tdude" } }
                };
                var repository = Stub<ICustomerRepository>();
                // act
                repository.Stub(rep => rep.GetCustomerByDifID("55")).Return(customer);
                // assert
                Assert.AreEqual(customer, repository.GetCustomerByDifID("55"));
            }

            [TestMethod]
            public void CanGetCustomerByLogin()
            {
                // arrange
                var customer = new Customer()
                {
                    CustId = 5,
                    DifId = "55",
                    CustLookupName = "The Dude",
                    LoginList = new[] { new Login { LoginCustId = 5, LoginName = "tdude" } }
                };
                var repository = Stub<ICustomerRepository>();
                // act
                repository.Stub(rep => rep.GetCustomerByLogin("tdude")).Return(customer);
                // assert
                Assert.AreEqual(customer, repository.GetCustomerByLogin("tdude"));
            }
        }

    Test base class:

        public class TestClassBase
        {
            protected T Stub<T>() where T : class
            {
                return MockRepository.GenerateStub<T>();
            }
        }

    ICustomerRepository and IRepository:

        public interface ICustomerRepository : IRepository<Customer>
        {
            IList<Customer> FindCustomers(string q);
            Customer GetCustomerByDifID(string difId);
            Customer GetCustomerByLogin(string loginName);
        }

        public interface IRepository<T>
        {
            void Save(T entity);
            void Save(List<T> entity);
            bool Save(T entity, out string message);
            void Delete(T entity);
            T GetById(int id);
            ICollection<T> FindAll();
        }
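
    One way to cut the repeated arrange blocks is MSTest's per-test setup: the method below runs before every [TestMethod], so each test gets a fresh Customer and no test can see another test's modifications. A sketch reusing the question's types (the field and method names are illustrative):

        private Customer _customer;

        // Re-created before every test, so sharing it is safe.
        [TestInitialize]
        public void SeedCustomer()
        {
            _customer = new Customer
            {
                CustId = 5,
                DifId = "55",
                CustLookupName = "The Dude",
                LoginList = new[] { new Login { LoginCustId = 5, LoginName = "tdude" } }
            };
        }

        [TestMethod]
        public void CanGetCustomerById()
        {
            var repository = Stub<ICustomerRepository>();
            repository.Stub(rep => rep.GetById(5)).Return(_customer);
            Assert.AreEqual(_customer, repository.GetById(5));
        }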

    Read the article

  • JavaScript Resource Management Design Pattern

    - by Adam
    As a web developer, a common problem I find myself tackling is waiting for something to load before doing something else. In particular, I often hide (using either display: none; or visibility: hidden;, depending on the situation) elements while waiting for a background image or a CSS file to load.

    Consider this example from Last.fm. They overlay a semi-transparent PNG over each album art image so that it looks like it's inside a jewel case. They let it load when it loads, so depending on your internet speed, you may temporarily see the art image by itself (without the overlay). In this case, the album art looks fine without the jewel-case effect. But in similar situations, I have found that I don't want the user to see the site's design mangled as resources incrementally load. So, in rare cases I have hidden everything from the user until the whole kit and caboodle has loaded. But this is often a pain to write out, and may force the user to wait a long time to see anything (besides "loading..." text).

    I can think of (and have used on occasion) some obvious solutions/compromises:

    • Use some inline CSS so that as certain parts of the DOM load and render, they immediately have the correct size/position/etc.
    • Immediately render the navigation part of the site, so that if the user only wants the current page to get somewhere else, they don't have to wait for the rest to load.
    • Load pixelated images first as placeholders for layout while lazy-loading higher-quality images as replacements.
    • Something quirky, like a cute animated GIF to distract the user during a "loading..." phase.
    • Show useful information as a reference while loading the full UI (something akin to Gmail's Inbox Preview).

    (Sorry if my question was basically just asked and answered...) Despite all of these ideas, I still find myself hoping there are better ways of doing some of these things. So I guess what I'm looking for is some inspiration and/or any creative ways of dealing with this problem that you may have seen out in the wild.

    Read the article

  • hg access control to central repository

    - by andreas buykx
    We come from a Subversion background, where we have a QA manager who gives commit rights to the central repository once he has verified that all QC activities have been done. A couple of colleagues and I are starting to use Mercurial, and we want a shared repository that would contain our QC-ed changes. Each developer clones the repository and pushes his changes back to the shared repository. I've read the Hg Init tutorial and skimmed through the red bean book, but could not find how to control who is allowed to push changes to the shared repository. How would our existing model of QA-manager-controlled commits translate to a Mercurial 'central' repository?
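
    For reference, when the shared repository is served over HTTP with hgweb, push access can be restricted per repository in that repository's .hg/hgrc; a minimal sketch, assuming the QA manager's login is qa-manager:

        [web]
        # Everyone may pull (read); only the QA manager may push.
        allow_read = *
        allow_push = qa-manager

    If pushes happen over SSH instead, a server-side hook on the shared repository (e.g. pretxnchangegroup) can enforce the same gate.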

    Read the article

  • SQLAlchemy unsupported type error - and table design issues?

    - by Az
    Hi there, back again with some more SQLAlchemy shenanigans. Let me step through this. My table is now set up like so:

        engine = create_engine('sqlite:///:memory:', echo=False)
        metadata = MetaData()
        students_table = Table('studs', metadata,
            Column('sid', Integer, primary_key=True),
            Column('name', String),
            Column('preferences', Integer),
            Column('allocated_rank', Integer),
            Column('allocated_project', Integer)
        )
        metadata.create_all(engine)
        mapper(Student, students_table)

    Fairly simple, and for the most part I've been enjoying the ability to query almost any bit of information I want, provided I avoid the error cases below. The class it is mapped from is:

        class Student(object):
            def __init__(self, sid, name):
                self.sid = sid
                self.name = name
                self.preferences = collections.defaultdict(set)
                self.allocated_project = None
                self.allocated_rank = 0
            def __repr__(self):
                return str(self)
            def __str__(self):
                return "%s %s" % (self.sid, self.name)

    Explanation: preferences is basically a set of all the projects the student would prefer to be assigned. When the allocation algorithm kicks in, a student's allocated_project emerges from this preference set. Now if I try to do this:

        for student in students.itervalues():
            session.add(student)
        session.commit()

    it throws two errors, one for the allocated_project column (seen below) and a similar error for the preferences column:

        sqlalchemy.exc.InterfaceError: (InterfaceError) Error binding parameter 4 - probably unsupported type.
        u'INSERT INTO studs (sid, name, allocated_rank, allocated_project) VALUES (?, ?, ?, ?, ?, ?, ?)'
        [1101, 'Muffett,M.', 1, 888 Human-spider relationships (Supervisor id: 123)]

    If I go back into my code I find that, when I'm copying the preferences from the given text files, it actually refers to the Project class, which is mapped to a dictionary using the unique project ids (pid) as keys. Thus, as I iterate through each student via their rank and on to the preferences set, it adds not a project id, but a reference to the project object from the projects dictionary:

        students[sid].preferences[int(rank)].add(projects[int(pid)])

    Now this is very useful to me, since I can find out all I want to about a student's preferred projects without having to run another check to pull up information about the project id. The form you see in the error is the project object's print representation:

        return "%s %s (Supervisor id: %s)" % (self.proj_id, self.proj_name, self.proj_sup)

    My questions are: I'm trying to store an object in a database field, aren't I? Would the correct way then be copying the project information (project id, name, etc.) into its own table, keyed by the unique project id? That way I could have the project id field in the student table just be an integer id and, when I need more information, just join the tables - and so on for other tables. If the above makes sense, then how does one maintain the relationship with a column in one table that is a key index on another table? Does this boil down to a database design problem? Are there any other elegant ways of accomplishing this?

    Apologies if this is a very long-winded question. It's rather crucial for me to solve this, so I've tried to explain as much as I can, whilst attempting to show that I'm trying (key word here, sadly) to understand what could be going wrong.

    Read the article

  • Injecting dependency into entity repository

    - by Hubert Perron
    Is there a simple way to inject a dependency into every repository instance in Doctrine 2? I have tried listening to the loadClassMetadata event and using setter injection on the repository, but this naturally resulted in an infinite loop, as calling getRepository in the event triggered the same event. After taking a look at the Doctrine\ORM\EntityManager::getRepository method, it seems that repositories are not using dependency injection at all; instead they are instantiated at the function level:

        public function getRepository($entityName)
        {
            $entityName = ltrim($entityName, '\\');
            if (isset($this->repositories[$entityName])) {
                return $this->repositories[$entityName];
            }
            $metadata = $this->getClassMetadata($entityName);
            $customRepositoryClassName = $metadata->customRepositoryClassName;
            if ($customRepositoryClassName !== null) {
                $repository = new $customRepositoryClassName($this, $metadata);
            } else {
                $repository = new EntityRepository($this, $metadata);
            }
            $this->repositories[$entityName] = $repository;
            return $repository;
        }

    Any ideas?

    Read the article

  • How to perform duplicate key validation using entlib (or DataAnnotations), MVC, and Repository pattern

    - by olivehour
    I have a set of ASP.NET 4 projects that culminate in an MVC (3 RC2) app. The solution uses Unity and EntLib Validation for cross-cutting dependency injection and validation. Both are working great for injecting repository and service layer implementations. However, I can't figure out how to do duplicate key validation. For example, when a user registers, we want to make sure they don't pick a UserID that someone else is already using. For this type of validation, the validating object must have a repository reference... or some other way to get an IQueryable/IEnumerable reference to check against rows already in the DB.

    What I have is a UserMetadata class that has all of the property setters and getters for a user, along with all of the appropriate DataAnnotations and EntLib Validation attributes. There is also a UserEntity class, implemented using the EF4 POCO Entity Generator templates. UserEntity depends on UserMetadata, because it has a MetadataTypeAttribute. I also have a UserViewModel class that has the same exact MetadataType attribute. This way, I can apply the same validation rules, via attributes, to both the entity and the viewmodel.

    There are no concrete references to the repository classes whatsoever. All repositories are injected using Unity. There is also a service layer that gets dependency injection: service layer implementations are injected into the Controller classes (the controllers only hold service layer interface references), and Unity then injects the repository implementations into the service layer classes (which likewise only hold interface references).

    I've experimented with the DataAnnotations CustomValidationAttribute in the metadata class. The problem with this is that the validation method must be static, and the method cannot instantiate a repository implementation directly. My repository interface is IRepository, and I have only one repository implementation class, defined as EntityRepository, for all domain objects. To instantiate a repository explicitly I would need to say new EntityRepository(), which would result in a circular dependency graph: UserMetadata [depends on] DuplicateUserIDValidator [depends on] UserEntity [depends on] UserMetadata.

    I've also tried creating a custom EntLib Validator along with a custom validation attribute. Here I don't have the same problem with a static method. I think I could get this to work if I could just figure out how to make Unity inject my EntityRepository into the validator class... which I can't. Right now, all of the validation code is in my Metadata class library, since that's where the custom validation attribute would go.

    Any ideas on how to perform validations that need to check against the current repository state? Can Unity be used to inject a dependency into a lower-layer class library?
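
    A common workaround when a framework instantiates validators outside the container is an ambient resolver that the composition root wires up once; a sketch with illustrative names (ValidatorDependencies, IUserRepository and UniqueUserIdValidator are assumptions, not EntLib or Unity API):

        using System;

        public interface IUserRepository
        {
            bool UserIdExists(string userId);
        }

        // Assigned once at startup from the Unity container; validators
        // resolve through this delegate instead of newing up a repository.
        public static class ValidatorDependencies
        {
            public static Func<IUserRepository> Users { get; set; }
        }

        public class UniqueUserIdValidator
        {
            // Instance method: no static restriction, unlike
            // DataAnnotations' CustomValidationAttribute.
            public bool IsValid(string candidateUserId)
            {
                IUserRepository repository = ValidatorDependencies.Users();
                return !repository.UserIdExists(candidateUserId);
            }
        }

        // Composition root (e.g. Global.asax.cs):
        //     ValidatorDependencies.Users = () => container.Resolve<IUserRepository>();

    The metadata library then only needs the delegate and the interface, so no circular reference to the concrete EntityRepository is introduced.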

    Read the article

  • Why can't I get into my repositories?

    - by ken1943
    Upgraded from 10.04 to 10.10 by mistake and am having problems with some of my programs. They either don't work or they disappear from view on the screen. The Repositories menu item in Synaptic Package Manager is a case in point: the computer does nothing when I choose this item, and I am locked out unless I edit /etc/apt/sources.list. The rest of the application seems to work, however. Update Manager just flashes on screen then disappears without doing anything, as does Printing. How do I downgrade from a higher to a lower repository level? I need to downgrade from 10.10 to 10.04. Each time Ubuntu upgrades to 10.10, evil things happen and the operating system really gets messed up. I really do not want to have to erase and reinstall my operating system if I can help it.

    Read the article

  • Looking for a better design: A readonly in-memory cache mechanism

    - by Dylan Lin
    Hi all, I have a Category entity (class), which has zero or one parent Category and many child Categories - it's a tree structure. The Category data is stored in an RDBMS, so for better performance I want to load all categories and cache them in memory while launching the application. Our system can have plugins, and we allow plugin authors to access the category tree, but they should not modify the cached items or the tree (I think a non-readonly design might cause some subtle bugs in this scenario); only the system knows when and how to refresh the tree. Here is some demo code:

        public interface ITreeNode<T> where T : ITreeNode<T>
        {
            // No setter
            T Parent { get; }
            IEnumerable<T> ChildNodes { get; }
        }

        // This class is generated by an O/R mapping tool (e.g. Entity Framework)
        public class Category : EntityObject
        {
            public string Name { get; set; }
        }

        // Because Category is not stateless, I create a cleaner view class for it.
        // This class is the node type of the category tree.
        public class CategoryView : ITreeNode<CategoryView>
        {
            public string Name { get; private set; }

            #region ITreeNode Members
            public CategoryView Parent { get; private set; }

            private List<CategoryView> _childNodes;
            public IEnumerable<CategoryView> ChildNodes
            {
                get { return _childNodes; }
            }
            #endregion

            public static CategoryView CreateFrom(Category category)
            {
                // here I can set the CategoryView.Name property
            }
        }

    So far so good. However, I want to make the ITreeNode interface reusable, and for some other types the tree should not be readonly. That isn't possible with the readonly ITreeNode above, so I want ITreeNode to be like this:

        public interface ITreeNode<T>
        {
            // has setter
            T Parent { get; set; }
            // use ICollection<T> instead of IEnumerable<T>
            ICollection<T> ChildNodes { get; }
        }

    But if we make ITreeNode writable, then we cannot make the category tree readonly, which is not good. So I wonder if we can do it like this:

        public interface ITreeNode<T>
        {
            T Parent { get; }
            IEnumerable<T> ChildNodes { get; }
        }

        public interface IWritableTreeNode<T> : ITreeNode<T>
        {
            new T Parent { get; set; }
            new ICollection<T> ChildNodes { get; }
        }

    Is this good or bad? Are there better designs? Thanks a lot! :)

    Read the article

  • SVN multiple repositories in subfolders

    - by fampinheiro
    I'm using Apache + SVN. Apache config file:

        LoadModule dav_module modules/mod_dav.so
        LoadModule dav_svn_module modules/mod_dav_svn.so
        LoadModule authz_svn_module modules/mod_authz_svn.so

        <Location /code>
            DAV svn
            SVNParentPath "c:/repositories"
        </Location>

    Imagine I have this file structure (in every t? I have one SVN repository):

        c
          repositories
            uc1
              0809v
                t1
                t2
                t3
              0809i
                t1
                t2
            uc2
              t1
              t2
            t1

    I can access the repositories using:

        svn://domain.com/code/uc1/0809v/t1
        svn://domain.com/code/uc1/0809v/t2
        svn://domain.com/code/uc1/0809v/t3

    I want to access them using the URLs:

        http://domain.com/code/uc1/0809v/t1
        http://domain.com/code/uc1/0809v/t2
        http://domain.com/code/uc1/0809v/t3

    and see the content of the repository in the browser. If I create a repository at the root of the svn folder I can see it (http://domain.com/code/t1), but when I try the other URLs I get the error:

        Could not open the requested SVN filesystem

    My question is: is it possible to make it search all subfolders for SVN repositories?
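
    For what it's worth, mod_dav_svn expects each repository to be an immediate child of the SVNParentPath directory, so a nested layout generally needs one Location block per folder that directly contains repositories; a sketch of one possible setup using the question's paths (directives shown are standard mod_dav_svn, the exact layout is an assumption):

        <Location /code/uc1/0809v>
            DAV svn
            SVNParentPath "c:/repositories/uc1/0809v"
            # Lets a browser list the repositories under this path.
            SVNListParentPath On
        </Location>

        <Location /code/uc2>
            DAV svn
            SVNParentPath "c:/repositories/uc2"
            SVNListParentPath On
        </Location>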

    Read the article

  • SVN Merge two repositories - what about the UUIDs?

    - by grant007
    Hi, this is my scenario: I originally had two separate repositories, and I need to merge them into one. I don't care too much about the history in these repositories. I created a new repository and can import the old ones, no problem. The issue is with users' working copies: I can ask them to run svn switch --relocate, but there is still the issue of the UUID, which is different for each original repository, and I can only reassign the UUID of the new repository to match one of the originals. So what is the best method to resolve this? (I suspect/hope I am going about this wrong...) Any ideas appreciated! -Grant.
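
    Subversion 1.5 and later can stamp a repository with a chosen UUID, which is the usual way to keep one set of working copies valid after such a merge; a sketch with illustrative paths:

        # Read the UUID of the original repository you want to keep...
        svnlook uuid /path/to/original-repo

        # ...and stamp the combined repository with it (Subversion 1.5+).
        svnadmin setuuid /path/to/new-repo <uuid-from-above>

        # Working copies that came from the *other* original repository
        # cannot match both UUIDs and still need a fresh checkout, plus:
        svn switch --relocate http://server/old-repo http://server/new-repo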

    Read the article

  • Using Mercurial repository inside a Git one: Feasible? Sane?

    - by Portablejim
    I am thinking of creating a Mercurial repository under a Git repository, e.g.:

        ..../git-repository/directory/hg-repo/

    The 2 repositories: is it possible to manage them (keeping your sanity)? How similar is it to this?

    I am a computer science student at university. I manage my work in Git, mainly as a distribution mechanism between my desktop and USB drive (after realizing that rsync fails when you have changes in more than one place), and I try to use Git as a VCS as I work. I have finished a semester in which I did a small group project to prepare for a larger group project next year. We had to use Subversion, and experienced the joys of a centralised VCS (including downtime). I tried to keep the Subversion repository separate from my Git repository for the subject**, but it was annoying that it was separate (not in the place where I store assignments), so I moved to using a Subversion repository inside my Git repository.

    As I think ahead (maybe I am thinking too far ahead), I realise that I will have to try to convince people to use a DVCS, and Mercurial will probably be the one preferred (Windows and Mac GUI support, closer to Subversion). Having done some research into the whole Git vs Mercurial debate (without having used Mercurial at all), I still prefer Git. Can I have a Mercurial repository inside a Git one without going mad (or ruining something)? Or is it something I should not consider at all? (Or is it a bad question that should be deleted?)

    ** I think outside of Australia it is called a course
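
    If the two are nested, one practical precaution is to keep the outer Git repository from tracking the embedded Mercurial one at all; a sketch of the relevant ignore entry, using the layout above:

        # .gitignore at the root of the outer Git repository:
        directory/hg-repo/

    That way git status stays quiet about the Mercurial repository's files, and neither tool ends up versioning the other's history.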

    Read the article

  • Signals and threads - good or bad design decision?

    - by Jens
    I have to write a program that performs highly computationally intensive calculations. The program might run for several days. The calculation can easily be separated into different threads without the need for shared data. I want a GUI or a web service that informs me of the current status. My current design uses BOOST::signals2 and BOOST::thread. It compiles and so far works as expected. If a thread finishes one iteration and new data is available, it emits a signal which is connected to a slot in the GUI class. My question(s):

    • Is this combination of signals and threads a wise idea? In another forum somebody advised someone else not to "go down this road". Are there potential deadly pitfalls nearby that I failed to see?
    • Is my expectation realistic that it will be "easy" to use my GUI class to provide a web interface or a QT, a VTK or a whatever window?
    • Is there a more clever alternative (like other boost libs) that I overlooked?

    The following code compiles with g++ -Wall -o main -lboost_thread-mt <filename>.cpp:

        #include <boost/signals2.hpp>
        #include <boost/thread.hpp>
        #include <boost/bind.hpp>
        #include <iostream>
        #include <iterator>
        #include <sstream>
        #include <string>
        #include <vector>

        using std::cout;
        using std::cerr;
        using std::string;

        /** Called when a CalcThread finished a new bunch of data. */
        boost::signals2::signal<void(string)> signal_new_data;

        /** The whole data will be stored here. */
        class DataCollector {
            typedef boost::mutex::scoped_lock scoped_lock;
            boost::mutex mutex;
        public:
            /** Called by CalcThreads to store their data. */
            void push(const string &s, const string &caller_name) {
                scoped_lock lock(mutex);
                _data.push_back(s);
                signal_new_data(caller_name);
            }

            /** Output everything collected so far to std::out. */
            void out() {
                typedef std::vector<string>::const_iterator iter;
                for (iter i = _data.begin(); i != _data.end(); ++i)
                    cout << " " << *i << "\n";
            }
        private:
            std::vector<string> _data;
        };

        /** Several of those can calculate stuff. No data sharing needed. */
        struct CalcThread {
            CalcThread(string name, DataCollector &datcol)
                : _name(name), _datcol(datcol) {}

            /**
             * Expensive algorithms will be implemented here.
             * @param num_results how many data sets are to be calculated by this thread.
             */
            void operator()(int num_results) {
                for (int i = 1; i <= num_results; ++i) {
                    std::stringstream s;
                    s << "[";
                    if (i == num_results)
                        s << "LAST ";
                    s << "DATA " << i << " from thread " << _name << "]";
                    _datcol.push(s.str(), _name);
                }
            }
        private:
            string _name;
            DataCollector &_datcol;
        };

        /** Maybe some VTK or QT or both will be used someday. */
        class GuiClass {
        public:
            GuiClass(DataCollector &datcol) : _datcol(datcol) {}

            /**
             * If the GUI wants to present or at least count the data collected so far.
             * @param caller_name is the name of the thread whose data is new.
             */
            void slot_data_changed(string caller_name) const {
                cout << "GuiClass knows: new data from " << caller_name << std::endl;
            }
        private:
            DataCollector &_datcol;
        };

        int main() {
            DataCollector datcol;
            GuiClass mc(datcol);
            signal_new_data.connect(boost::bind(&GuiClass::slot_data_changed, &mc, _1));

            CalcThread r1("A", datcol), r2("B", datcol), r3("C", datcol),
                       r4("D", datcol), r5("E", datcol);
            boost::thread t1(r1, 3);
            boost::thread t2(r2, 1);
            boost::thread t3(r3, 2);
            boost::thread t4(r4, 2);
            boost::thread t5(r5, 3);
            t1.join(); t2.join(); t3.join(); t4.join(); t5.join();

            datcol.out();
            cout << "\nDone" << std::endl;
            return 0;
        }

    Read the article

  • Repository/Updating/Upgrading Issue

    - by Jakob
    The other day I was asked to upgrade from 13.04 to 13.10; at the time I was busy and hit no. I cannot upgrade/update at this point: I get (error -11) or a 404 in the terminal, and in the Software Updater I get 'failed to download repository information.' I have tried changing my "Download From" setting from "Best" to "Main" and even a few other countries, and in "Other Software" I have tried disabling packages, but that doesn't help whatsoever. I have tried several of the other commands to try and fix it, such as --fix-missing or sudo apt-get clean. P.S. This has also affected my Thunderbird client; I cannot send/receive emails.

    Here is my error log when trying to upgrade:

        jakob@Skeletor:~$ sudo update-manager -d
        gpg: /tmp/tmpvejqvl/trustdb.gpg: trustdb created
        gpg: /tmp/tmpnayby6/trustdb.gpg: trustdb created
        Traceback (most recent call last):
          File "/usr/lib/python3/dist-packages/defer/__init__.py", line 483, in _inline_callbacks
            result = gen.throw(excep)
          File "/usr/lib/python3/dist-packages/UpdateManager/backend/InstallBackendAptdaemon.py", line 86, in commit
            True, close_on_done)
          File "/usr/lib/python3/dist-packages/defer/__init__.py", line 483, in _inline_callbacks
            result = gen.throw(excep)
          File "/usr/lib/python3/dist-packages/UpdateManager/backend/InstallBackendAptdaemon.py", line 158, in _run_in_dialog
            yield trans.run()
        aptdaemon.errors.TransactionFailed: Transaction failed: Package does not exist
        Package linux-headers-3.8.0-33 isn't available
        gpg: /tmp/tmp3kw_hl/trustdb.gpg: trustdb created

    And let me throw in my sudo apt-get update too, which has been working variably:

        E: Some index files failed to download. They have been ignored, or old ones used instead.

    This is the short version, but it looks like this fairly consistently. Sometimes it downloads, sometimes it doesn't; sometimes it tells me I have an update and then does nothing. If it helps, I have recently had issues trying to install Samba and connecting to the office's NAS drive. That works now, but I had to edit /etc/fstab and a few other things to get it working. I understand it could also be a DNS problem, but this has been going on for a few days. I've already tried changing the DNS server on my computer, and I am not allowed to alter the DNS on our company's router.

    Read the article

  • Nvidia GT218 repository drivers don't work

    - by user1042840
    I upgraded all packages with sudo apt-get upgrade on my Ubuntu 10.04 box, and I now have Ubuntu 12.04 (3.2.0-29-generic-pae). I have two monitors and the following GPU:

        01:00.0 VGA compatible controller: NVIDIA Corporation GT218 [NVS 300] (rev a2)

    After upgrading to 12.04, I somehow lost my previous setup with one common workspace stretched across two monitors. When Ubuntu starts, only one monitor is on, and I see this message on the active monitor:

        Not optimum mode. Recommended mode: 1680x1050 60Hz

    I used the Nvidia proprietary drivers on 10.04, but now jockey-text --list shows:

        xorg:nvidia_current - NVIDIA accelerated graphics driver (Proprietary, Disabled, Not in use)
        xorg:nvidia_current_updates - NVIDIA accelerated graphics driver (post-release updates) (Proprietary, Enabled, Not in use)

    When I run sudo nvidia-settings it says:

        You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run `nvidia-xconfig` as root), and restart the X server.

    I ran nvidia-xconfig and rebooted, but jockey-text --list says the same after the reboot: Not in use. The same with nvidia-current: Enabled, but Not in use. I also tried nvidia-173, but I ended up in a tty immediately at startup, so I removed it. I used to have some problems with the Nvidia proprietary drivers on 10.04 (I had to put paths to EDID files in /etc/X11/xorg.conf explicitly), but the resolution was as recommended and both monitors were working.

    If I understand correctly, the nouveau drivers are used by default now, because the resolution is still quite high, definitely not 800x600. xrandr showed:

        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 320 x 400, current 1600 x 1200, maximum 1600 x 1200
        default connected 1600x1200+0+0 0mm x 0mm
           1600x1200       66.0*
           1280x1024       76.0
           1024x768        76.0
           800x600         73.0
           640x480         73.0
           640x400          0.0
           320x400          0.0
           1680x1050_60.00 (0x4f)  146.2MHz
                h: width  1680 start 1784 end 1960 total 2240 skew 0 clock 65.3KHz
                v: height 1050 start 1053 end 1059 total 1089 clock 60.0Hz

    However, colors seem a bit faded and blurry with the nouveau drivers, the mouse cursor is invisible whenever it's inside a Firefox window, and only one monitor is working. I like open source, and if possible I'd prefer to use the nouveau drivers, but a few things would need fixing. I'm also curious why the nvidia-current drivers from the repository don't work now. I read it has something to do with the new X server in Ubuntu 12.04 - is that true? How can I get it back to work?

    Read the article

  • Is there a way to put relationships/constraints into CSS?

    - by hekevintran
    In every design tool or art principle I've heard of, relationships are a central theme. By relationships I mean the thing you can do in Adobe Illustrator to specify that the height of one shape is equal to half the height of another. You cannot express this information in CSS; CSS hard-codes all values. Using a language like LESS, which allows variables and arithmetic, you can get closer to relationships, but it's still a CSS variant. This inability is, to my mind, the biggest problem with CSS. CSS is supposed to be a language that describes the visual component of a web page, but it ignores relationships and constraints - ideas that are at the core of art. How possible is it to imagine a new web design language that can express relationships and constraints, and that can be implemented in JavaScript using the current CSS properties?

    Read the article

  • How many repositories should I use to maintain my scripts under version control?

    - by romandas
    I mainly code small programs for myself, but recently, I've been starting to code for my peers on my team. To that end, I've started using a Mercurial repository to maintain my code in some form of version control (specifically, Tortoise-Hg on Windows). I have many small scripts, each in their own directory, all under one repository. However, while reading Joel's Hg Tutorial, I tried cloning a directory for one of my bigger scripts to create a "stable" version and found I couldn't do it because the directory wasn't itself a repository. So, I assume (and please correct me if I'm mistaken) that in order to use cloning properly, I'd have to create a repository for each script/directory. But.. would that be a "good idea" or a future maintenance nightmare waiting to happen? Succinctly, do I keep all my (unrelated) scripts in one repository, or should I create a repository for each? Or some unknown third option?
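
    For what it's worth, splitting later is cheap if the history of each script doesn't matter much; a sketch, run once per script directory (the directory name is illustrative):

        cd my-scripts/backup-tool    # one of the script directories
        hg init
        hg add
        hg commit -m "backup-tool: now its own repository"

    Mercurial's bundled convert extension (hg convert, with a filemap) can also carve a subdirectory out into its own repository while preserving its history.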

    Read the article

  • How to avoid returning mocks from a mocked object list

    - by koen
    I'm trying out mock/responsibility-driven design, but I seem to have trouble avoiding mocks returned from mocks in the case of finder objects. An example could be an object that checks whether the bills from last month are paid. It needs a service that retrieves a list of bills, so I need to mock that service. At the same time, I need that mock to return mocked Bills (since I don't want my test to rely on the correctness of the Bill implementation). Is my design flawed? Is there a better way to test this? Or is this just the way it will be when using finder objects (the finding of the bills, in this case)?
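
    One way out is to stub only the finder and hand it plain, directly constructed Bill objects, so nothing mocked ever returns another mock; a sketch in the Rhino Mocks style used elsewhere on this page (Bill, IBillService and BillChecker are illustrative assumptions, and the question names no language):

        [TestMethod]
        public void LastMonthsBillsArePaid()
        {
            var lastMonth = new DateTime(2010, 4, 1);

            // Plain Bill value objects: cheap to build, nothing to mock.
            var bills = new List<Bill>
            {
                new Bill { DueDate = lastMonth, Paid = true },
                new Bill { DueDate = lastMonth, Paid = true }
            };

            // Only the finder is stubbed; it returns the real objects.
            var service = MockRepository.GenerateStub<IBillService>();
            service.Stub(s => s.GetBills(lastMonth)).Return(bills);

            var checker = new BillChecker(service);
            Assert.IsTrue(checker.AllPaid(lastMonth));
        }

    This only shifts the burden, of course, if Bill itself is expensive to construct; in that case a small test builder for Bill is usually simpler than a mock.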

    Read the article

  • Why implement DB connection pointer object as a reference counting pointer? (C++)

    - by DVK
    At our company, one of the core C++ classes (the database connection pointer) is implemented as a reference-counting pointer. To be clear, the objects are NOT DB connections themselves, but pointers to a DB connection object. The library is very old, and nobody who designed it is around anymore. So far, neither I nor any of the C++ experts in the company that I asked have come up with a good reason why this particular design was chosen. Any ideas? It is introducing some problems (partially due to the awful reference-counting pointer implementation used), and I'm trying to understand whether this design actually has some deep underlying reason. The usage pattern these days seems to be that the DB connection pointer object is returned by a DB connection manager class, and it's somewhat unclear whether DB connection pointers were designed to be usable independently of the DB connection manager.

    Read the article

  • Plan variable and call dependencies

    - by Gerenuk
    I'd like to write down the design of my program to understand the dependencies and calls better. I know there are class diagrams, which show inheritance and attribute variables. However, I'd also like to document the input parameters to methods, and in particular which calls a method executes inside (e.g. on its input parameters). Sometimes it might also be useful to show how actual objects are connected (if there is a standard structure). This way I can have a better understanding of the modules and the design before starting to program. Can you suggest a method for doing this kind of software design? It should map one-to-one to the code structure, so that I really notice all the quirks beforehand (instead of a high-level design whose pieces are hard to implement without further work). Maybe some special diagram or tool, or a combination? It is static dependency and call design, rather than time-dependent execution monitoring. (I use Python, if you have any specialized recommendations.)

    Read the article

  • Sending object C from class A to class B

    - by user278618
    Hi, I can't figure out how to design the classes in my system. In class A I create a selenium object (it simulates user actions on a website). In this class A I also create other objects, like SearchScreen, PaymentScreen and SummaryScreen:

        # -*- coding: utf-8 -*-
        from selenium import selenium
        import unittest, time, re

        class OurSiteTestCases(unittest.TestCase):
            def setUp(self):
                self.verificationErrors = []
                self.selenium = selenium("localhost", 5555, "*chrome", "http://www.someaddress.com/")
                time.sleep(5)
                self.selenium.start()

            def test_buy_coffee(self):
                sel = self.selenium
                sel.open('/')
                sel.window_maximize()
                search_screen = SearchScreen(self.selenium)
                search_screen.choose('lavazza')
                payment_screen = PaymentScreen(self.selenium)
                payment_screen.fill_test_data()
                summary_screen = SummaryScreen(self.selenium)
                summary_screen.accept()

            def tearDown(self):
                self.selenium.stop()
                self.assertEqual([], self.verificationErrors)

        if __name__ == "__main__":
            unittest.main()

    Here is an example SearchScreen module:

        class SearchScreen:
            def __init__(self, selenium):
                self.selenium = selenium

            def search(self):
                self.selenium.click('css=button.search')

    I want to know: is there anything wrong with the design of those classes?

    Read the article

  • Writing Great Software

    - by 01010011
    Hi, I'm currently reading Head First Object-Oriented Analysis and Design. The book states that to write great software (i.e. software that is well-designed, well-coded, and easy to maintain, reuse, and extend) you need to do three things: first, make sure the software does everything the customer wants it to do; once step 1 is completed, apply object-oriented principles and techniques to eliminate any duplicate code that might have slipped in; and once steps 1 and 2 are complete, apply design patterns to make sure the software is maintainable and reusable for years to come. My question is: do you follow these steps when developing great software? If not, what steps do you usually follow in order to ensure it's well designed, well coded, and easy to maintain, reuse, and extend?

    Read the article

  • What is the right way to implement communication between java objects?

    - by imoschak
    I'm working on an academic project which simulates a rather large queuing procedure in Java. The core of the simulator rests within one package, where there are 8 classes, each implementing a single concept. Every class in the project follows the SRP. These classes encapsulate the behavior of the simulator and interconnect every other class in the project. The problem that has arisen is that most of these 8 classes are, as is logical I think, tightly coupled, and each one has to have working knowledge of every other class in this package in order to call its methods when needed. The application needs only one instance of each class, so it might be better to create static fields for each class in a new class and use that to make calls, instead of keeping a reference in each class to every other class in the package (which I'm certain is incorrect). But is this considered a correct design solution? Or is there a design pattern that better suits my needs?
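
    One standard alternative is a mediator: each of the 8 classes talks to a single hub instead of holding references to the other 7. A language-agnostic sketch (written in C# for brevity, though the question is Java; all names are illustrative):

        using System;
        using System.Collections.Generic;

        // Components publish events to the mediator and subscribe to the
        // topics they care about, instead of referencing each other.
        public class SimulationMediator
        {
            private readonly Dictionary<string, List<Action<object>>> _subscribers
                = new Dictionary<string, List<Action<object>>>();

            public void Subscribe(string topic, Action<object> handler)
            {
                if (!_subscribers.ContainsKey(topic))
                    _subscribers[topic] = new List<Action<object>>();
                _subscribers[topic].Add(handler);
            }

            public void Publish(string topic, object payload)
            {
                if (!_subscribers.ContainsKey(topic))
                    return;
                foreach (var handler in _subscribers[topic])
                    handler(payload);
            }
        }

        // Usage sketch:
        //     mediator.Subscribe("customer-arrived", c => server.Enqueue(c));
        //     mediator.Publish("customer-arrived", customer);

    The coupling then runs from each class to the mediator only, which also removes the need for static fields holding the single instances.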

    Read the article
