Search Results

Search found 27047 results on 1082 pages for 'multiple projects'.

  • Changing Text in Visio Org Chart Shape Changes Multiple Shapes' Text

    - by Eric
    I have inherited an organizational chart that was created in Visio 2003. I am updating it with Visio 2007. When changing the text in one shape, such as a person's title, multiple nearby shapes change their text to match. For example, if I change Bob's title from Programmer to Programmer/DBA, then Wendy's text will change to "Bob - Programmer/DBA". Some changes update three or four other boxes; some update only one. My guess is that the original author copied or duplicated one box to create the others, and that this created some type of link between them. How do I remove this link? Thanks!

  • Archive-all-messages account on Exchange 2003 receiving multiple copies

    - by aeolist
    Hello everyone, I am using Microsoft Exchange 2003 on Windows 2003 Small Business Server. For the past month or so, the archiver account has been receiving multiple copies (about 15 or so) of each and every email. Edit: all copies share the same Message ID. The machine that hosts Exchange is updated religiously and an antivirus scan is run on a daily basis. Would anyone have any ideas about how to deal with this and, furthermore, how I would be able to delete the multiple copies of emails from the Outlook 2003 inbox? I will edit the entry, answering any questions or updating on my efforts. Thanks in advance

  • Multiple VMs for Tomcat cluster vs Multiple Tomcat instances on one physical box

    - by Greymeister
    I'm working on a project that will be implemented into production using a cluster of Apache Tomcat instances, and I'm looking for the best hardware/OS solution; VMs have come up as one option. I have run ESXi/ESX instances before for development and testing, but for a hosting environment I'm curious whether having multiple VMs is actually worse than just configuring one server to host multiple instances of Tomcat. These are my guesses:

    Pros for VMware:
    - Easier maintenance/backup for individual VMs (VMware makes this easy)
    - Can remote login to individual VMs without having to give host access (security?)
    - Easier to re-purpose the machine for OS/hardware changes

    Pros for running on one physical machine:
    - Overhead of only one OS (and no VMware footprint)
    - Apply OS/security updates once
    - One less administrative layer (no VM expertise required)

    I'm curious if anyone has any other ideas about what the benefits would be for either option.

  • Multiple Selections from Dropdown in Word 2011 Form

    - by BR Hudgens
    Anyone know if there is a way to set up a Word form that allows the user to select multiple items from a dropdown list, which are then displayed? For example, I need to set up a form to identify the applicability of specific procedures to various levels of our organization. Since the listing of all departments under all divisions can be pretty lengthy, I was hoping for the ability to check off a division and then have a dropdown list on the same line to select the applicable departments (so the selections are specific to that division and are not all listed on the form if that division is not applicable). First, I'm not seeing how to create a dropdown that allows multiple selections. Beyond that, I don't know if that form element alone would then display all selected items or if that would require another element of some type. Anyone have any suggestions on this? Thanks in advance!

  • Share history in multiple zsh shells

    - by michael
    I am trying to set up zsh so that it shares command history between different zsh sessions:

    - in multiple tabs
    - in multiple gnome-terminals
    - in different screen sessions

    I have put this in .zshrc:

        # To save every command before it is executed
        # (this is different from bash's history -a solution):
        setopt inc_append_history
        # To retrieve the history file every time history is called upon:
        setopt share_history

    but that does not work. E.g. I type one command, gedit afile, then I go to another zsh and type history, and I don't see gedit afile. The output of setopt is:

        % setopt
        nohistbeep histexpiredupsfirst histfindnodups histignorealldups histignoredups histignorespace histnostore histreduceblanks histsavenodups histverify incappendhistory interactive monitor promptsubst sharehistory shinstdin zle

    How can I achieve this?
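
    For reference, a minimal .zshrc history block that shared history depends on (a sketch; it assumes HISTFILE and SAVEHIST are not set elsewhere, since share_history has nothing to share without them):

        # sketch ~/.zshrc — share_history needs a history file to share
        HISTFILE=~/.zsh_history   # file all sessions read from / write to
        HISTSIZE=10000            # lines kept in memory
        SAVEHIST=10000            # lines persisted to $HISTFILE
        setopt share_history      # append and import commands incrementally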

  • Linux - Multiple service statuses with one command

    - by Jimbo
    I'm trying to retrieve a list of multiple service statuses on Linux using the service command. The service names all start with the string transmission-daemon, for example, and I require the ability to list multiple services' statuses with a single command. Here is what I'm currently trying (and failing) with.

    Grabbing a list of services using grep:

        service $(ls /etc/init.d | grep "transmission-daemon") status

    Listing all statuses, then grepping for them:

        service --status-all | grep "transmission-daemon"

    Neither output is much help. How can I effectively achieve what I require with a single command, so that I can then continue piping to awk for further customisation? Desired example output:

        transmission-daemon started
        transmission-daemon2 stopped
        transmission-daemon3 started
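
    One sketch of the loop form (assuming init scripts under /etc/init.d named transmission-daemon*; the exact status wording depends on each script):

        for s in /etc/init.d/transmission-daemon*; do
            printf '%s ' "$(basename "$s")"   # print the service name...
            "$s" status                       # ...then its status line
        done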

  • Single NFS folder shared across multiple clients

    - by parthi_for_tech
    I'm trying to mount a single NFS folder from a server, say "/share/folder", on multiple clients (up to 32), and the clients try to access the folder and create files. The problem I'm facing is that when I execute the write command, only one client is able to access the folder; the remaining clients are blocked and not able to proceed with the write. So, is what I'm trying to do above correct? Can we write/read files in the same folder from multiple clients? If yes, how can I do it in parallel? Kindly advise! Thanks
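
    For reference, NFS does allow concurrent writers; a typical export permitting read/write from a whole client subnet might look like this (a sketch; the path is from the question, the subnet and options are assumptions):

        # /etc/exports — hypothetical entry
        # rw: every client in the range may write
        # sync / no_subtree_check: common, conservative defaults
        /share/folder 192.168.0.0/24(rw,sync,no_subtree_check)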

  • Looking for a router with multiple WAN ports and load balancing

    - by Cyrcle
    I'm going to be moving in a few months. The location I'm moving to is great, except it's on a road with very few people, so the internet access option is limited to DSL at 1.6Mbps down, 384kbps up. This is much slower than I'm used to. One option is to get at least two of the DSL lines. There's also a good possibility that I'll be able to get WiMax or similar. I've been looking around a bit and it seems like what I need is a load-balancing router with multiple WAN ports. Can anyone recommend some good ones? I could also go with a small power-efficient Linux box with multiple NICs. What would be good software for that? It'd need to be able to handle around 10Mbps. Thanks for any help.
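
    If the Linux-box route wins out, the kernel itself can balance outbound traffic across two uplinks with a multipath default route; a sketch (gateway addresses and interface names are assumptions):

        # iproute2 multipath: spread new connections across both links
        ip route add default scope global \
            nexthop via 10.0.0.1 dev eth0 weight 1 \
            nexthop via 10.0.1.1 dev eth1 weight 1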

  • Cisco ASA 5505 inside interface multiple IP addresses

    - by Oneiroi
    I have an issue this morning: I want to be able to assign multiple IP addresses to the inside interface to facilitate an IP-range migration for an office, namely from a 192.168.1.x range to the new range, with a minimum of interruption for those working in the office. (New DHCP leases will use the new range, whilst those still on the 192.168.1.x range can continue to work until their lease is renewed.) However, I cannot for the life of me figure out how to achieve this; trying to create multiple interfaces for the job leads to complaints about the license only allowing two active interfaces. Any suggestions? Thanks in advance.

  • Multiple public keys for one user

    - by Russell
    This question is similar to "SSH public key authentication - can one public key be used for multiple users?" but the other way around. I'm experimenting with ssh, so any ssh server would work for your answers. Can I have multiple public keys linked to the same user? What are the benefits of doing so? Also, can different home directories be set for the different keys used (all of which link to the same user)? Please let me know if I'm unclear. Thanks.
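
    For context, sshd reads authorized_keys line by line, so several keys for one account is the normal case; a sketch (key material, paths and comments are placeholders):

        # ~/.ssh/authorized_keys — one public key per line, same user
        ssh-rsa AAAAB3...laptopKey... user@laptop
        ssh-rsa AAAAB3...desktopKey... user@desktop
        # per-key options can vary behaviour, e.g. a forced command:
        command="/usr/local/bin/backup.sh" ssh-rsa AAAAB3...backupKey... backup

    The home directory comes from the account itself, so it cannot vary per key, though per-key options like command= can change what each key is allowed to do.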

  • wildcard host name bindings for multiple subdomains in multiple sites on IIS7 with a single IP address

    - by orca
    Situation:

    - I have a single Windows 2008 server with a single public IP address.
    - I have multiple domains with wildcard A records pointing to that single IP address.
    - I need each domain to be hosted by a different web site (i.e. www.domain1.com by the site domain1site).
    - I need domain1.com to act like www.domain1.com.
    - I need each site to be able to have multiple subdomains (i.e. www.domain1.com, abc.domain1.com, xyz.domain1.com).

    Not relevant yet, but here it goes: I plan to handle each subdomain with a different application hosted in the same site (i.e. application /xyz in domain1site). However, I found out that IIS7 does not support creating web sites with a wildcard host name binding, and setting the binding without any subdomain (i.e. domain1.com) does not work, even for www.domain1.com. Is there a simple solution? Does any IIS extension, like Application Request Routing, provide such capability?
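
    In the absence of true wildcard host-name bindings in IIS7, one workaround is simply to add an explicit binding per host name; a sketch with appcmd (the site name is from the question, the list of subdomains is an assumption):

        REM run from %windir%\system32\inetsrv — one HTTP binding per host name
        appcmd set site "domain1site" /+bindings.[protocol='http',bindingInformation='*:80:domain1.com']
        appcmd set site "domain1site" /+bindings.[protocol='http',bindingInformation='*:80:www.domain1.com']
        appcmd set site "domain1site" /+bindings.[protocol='http',bindingInformation='*:80:xyz.domain1.com']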

  • Audio switch with multiple 3.5mm inputs & outputs

    - by David Nguyen
    I've been searching for a device that simply allows me to pick one input and one output from multiple inputs/outputs. I thought this would be called a switch, but I can only find ones with one input. Is there such a device that can do this? I will be attaching various devices (multiple consoles' sound, PC and laptop outputs) and outputting to my speakers or headphones. I'm looking for something small and simple. All inputs and outputs are 3.5mm.

  • Synergy configuration with multiple X screens

    - by Rob Drimmie
    I'm having a problem figuring out how to configure Synergy to behave on a system with multiple X screens. On my desktop I am running Ubuntu 10.04 LTS. I have two monitors, set up as separate X screens, by preference as well as to let me rotate the left-hand monitor. I also have a laptop, which is on the desk in front of me, lower than the other two monitors. I have a very simple synergy.conf:

        section: screens
            desktop:
            laptop.local:
        end
        section: links
            desktop:
                down = laptop
            laptop:
                up = desktop
        end

    It works, but on the desktop only for whichever X screen I run synergys from in a terminal (I haven't set it up to run at startup yet because I've been playing with the configuration). I can't find any information on how to reference multiple screens on one system, and would appreciate any help.

  • Samba+Windows: Allow multiple connections by different users?

    - by rgoytacaz
    Hello there, I have a machine running Ubuntu with Samba that I use to share stuff with my family's Windows machines on our local network. Currently they access a share for movies/music/etc. with one user. I want to connect them to another share as a different user (for example, user "goytacaz"). When I try connecting to this new share, Windows gives me "Error 1219" and complains about multiple connections by the same user. How do I get Windows to accept connections to the same server under different users?
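
    Error 1219 is a Windows client-side restriction: one workstation will not hold sessions to the same server name under two sets of credentials. A common workaround (a sketch; share names, addresses and users are assumptions) is to address the server by a second name, such as its IP address instead of its hostname, so Windows treats it as a different server:

        REM media share as the shared family user (* prompts for the password)
        net use M: \\ubuntubox\media * /user:family

        REM personal share via the server's IP, which Windows
        REM treats as a different server, avoiding error 1219
        net use P: \\192.168.1.50\goytacaz * /user:goytacaz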

  • Multiple Windows Desktop areas for Full-Screen applications

    - by arootbeer
    Is it possible to run multiple instances of Windows Explorer within a single user session, or to configure multiple desktops that are portions of a screen? I don't know the best way to describe what I want to achieve, but here's a picture of what I've got: a 4-monitor setup, 3 portrait and one landscape, on which I normally run a number of RDP sessions, Outlook, Chrome, a development environment or two, and so on and so forth. Most of these applications support full-screen views which mostly or completely hide the window borders, but on the Windows desktop they take up a full monitor to do so. What I want is 7 "desktops", "regions", call them what you will, each of which is, for the purposes of applications running in it, a "full screen" environment. I'm not tied to Windows Explorer for this, in case it helps; a different window manager that supports this functionality would be a perfectly acceptable answer.

  • Software Engineering Practices – Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, whether what we are doing is appropriate, and what factors one should consider when determining what practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go:

    If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access database, an Excel spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS).

    I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over, so I won’t bother going into them again here, but there are also (unnecessary) costs to over-engineering a solution. Sometimes putting in multiple layers, and IoC containers, and abstracting out the persistence, etc. is complete overkill if a one-time-use Access database could solve the problem perfectly well.

    A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, and then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point… maybe even multiple times).

    My thoughts are that we should be making a conscious decision around the start of each project about approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors:

    1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working?
    2. Complexity - How complex is the application functionality?
    3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time-use application, does it fill a short-term need, or is it more strategic and expected to be in use for many years to come?

    Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at and then translate that into some prescriptive set of practices and techniques you should be using.
    Rather my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum; indeed a large number of projects should probably fall somewhere within the middle, and different projects should adopt a different level of software engineering practices and maturity based on the needs of that project.

    To give an example of this way of thinking from my day job: every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event all these various pieces of data were tracked in separate spreadsheets, and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row.

    Obviously there is some room for improvement in how we are using software to manage the event’s logistics. The next time this event occurs we plan to provide the event-planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so, for example, if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS, meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining its place within the SEMS we can see why:

    Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice, since it isn’t a customer-facing application. The impact would simply be more work for our event-planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss).

    Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data and link them together appropriately. Precisely the task that Access (and/or Excel) can do with minimal custom development required.

    Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below).

    Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the Access database and re-writing it as an actual web or Windows application. In this case the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much.
    We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an Access database and continue using it; this is a decision that needs to be made on a case-by-case basis).

    At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple of years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also, as we hold focus groups and gather feedback from customers and potential customers, there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company.

    In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch.

    The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time, but rather that you should be aware of when and how the context and objectives around a project change, and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs.

    Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed, since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay).

    Developer Skills

    Another important aspect to this whole discussion is the skill sets of your architects and lead developers. When talking about the progression of a developer’s skills from junior->intermediate->senior->… they generally start by only being able to write code that belongs on the left side of the SEMS, and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS.
    We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right end of the SEMS (the exact practices and techniques that translates to are constantly changing, but that’s not the point here). A critical skill that I’d love to see more evidence of in our industry is the most senior guys not only being able to work at the right end of the SEMS, but, more importantly, being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill would be being able to make the conscious decision to move a project’s code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch.

    An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.

  • Hardware/Software inventory open source projects

    - by Dick dastardly
    Dear Stackoverflowers, I would like to develop a Network Inventory application that works on any operating system, reports on every possible resource attached to a network, and reports all pertinent details of hardware and software. That's (and I hate to use the phrase) my "end game". However, I am running before I can crawl here. I have no experience with this type of development, e.g. discovering a computer's hardware and software settings. I've spent almost two weeks googling and come up short! :-( So I am turning to you to ask these questions:

    - My first step is to find an existing open source project I can incorporate into my own code that extracts the fine-grained details I am after, e.g. EVERYTHING there is to know about the hardware and software on a single machine. Does this project exist, or do I have to develop that first? Do I have to write all this in C?
    - I am guessing getting this information about a computer is going to be easier than for printers, scanners, routers etc., i.e. everything else you would find attached to a network.
    - Once I have access to a single computer's details I then need to investigate how I can traverse an entire network of printers, scanners, routers, load balancers, switches, firewalls, workstations, servers, storage devices, laptops, monitors; the list goes on and on.
    - One problem I have is I don't have a 1000-machine network to play on! Is there any such resource available on the internet? (Is that a silly question?) Anywho, if you don't ask you won't find out!
    - One aspect I am really looking forward to finding out is how to traverse the entire network. Should I be using TCP/IP for this? What's a good site, blog, usergroup, or book for TCP/IP development? How do I go about getting through firewalls? How many questions can I ask in one go? :-)

    My previous question on this topic ended up with Python being championed as the language/script to develop this application in. Having looked at a few Python examples, they all seemed to be related to Windows networks and interrogating Windows Management Instrumentation (WMI). I had the feeling you can't rely on what's in WMI, and even if you can, that's no good for Unix networks. Surely there exists common code for extracting hardware and software details from a computer? Why can't I find it on the internet? Please help! There are no prizes though :-( Thanks in advance.

    I would like to apologise if I have broken forum rules or not tried hard enough on my own before asking for assistance. I just would like to start moving forward with this as it's one of the best projects I have been involved with. I am inspired by the many different challenges involved, and if I manage to produce a useful application at the end of it, it would hopefully be extremely helpful to many people. That's it, thanks in advance. DD

  • ASP, sorting database with conditions using multiple columns...

    - by Mitch
    First of all, I'm still working in classic ASP (VBScript) with an MS Access database. And yes, I know it's archaic, but I'm still hopeful I can do this! So now to my problem. Take the following table as an example:

        PROJECTS
        ContactName  StartDate   EndDate     Complete
        Mitch        2009-02-13  2011-04-23  No
        Eric         2006-10-01  2008-11-15  Yes
        Mike         2007-05-04  2009-03-30  Yes
        Kyle         2009-03-07  2012-07-08  No

    Using ASP (with VBScript) and an MS Access database as the backend, I'd like to be able to sort this table by date; however, depending on whether a given project is complete or not, I would like it to use either the "StartDate" or "EndDate" as the reference for a particular row. So to break it down further, this is what I'm hoping to achieve:

    - For PROJECTS where Complete = "Yes", reference "EndDate" for the purpose of sorting.
    - For PROJECTS where Complete = "No", reference "StartDate" for the purpose of sorting.

    So, if I were to sort the above table following these rules, the output would be:

        PROJECTS
          ContactName  StartDate    EndDate      Complete
        1 Eric         2006-10-01   2008-11-15*  Yes
        2 Mitch        2009-02-13*  2011-04-23   No
        3 Kyle         2009-03-07*  2012-07-08   No
        4 Mike         2007-05-04   2009-03-30*  Yes

    *I've put a star next to the date that should be used for the sort in the table above.

    NOTE: This is actually a simplified version of what I really need to do, but I think that if I could just figure this out, I'll be able to do the rest on my own. ANY HELP IS GREATLY APPRECIATED; I'VE BEEN STRUGGLING WITH THIS FOR FAR TOO LONG NOW! Thank you!
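
    A sketch of one way to express this directly in Access SQL (assuming Complete is stored as the text values 'Yes'/'No'; the connection object conn is hypothetical): the IIf() function picks the sort key per row.

        ' Classic ASP / VBScript sketch
        Dim sql, rs
        sql = "SELECT ContactName, StartDate, EndDate, Complete " & _
              "FROM PROJECTS " & _
              "ORDER BY IIf(Complete = 'Yes', EndDate, StartDate)"
        Set rs = conn.Execute(sql)   ' conn: an open ADODB.Connection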

  • Doctrine_Table_Exception: Unknown relation alias shoesTable in /home/sadiqsof/public_html/projects/g

    - by Sadiqur Rahman
    I am getting the following error message:

        Doctrine_Table_Exception: Unknown relation alias shoesTable in /home/sadiqsof/public_html/projects/giftshoes/system/database/doctrine/Doctrine/Relation/Parser.php on line 237

    My code is below:

        ------------BaseShoe------------

        <?php
        // Connection Component Binding
        Doctrine_Manager::getInstance()->bindComponent('Shoes', 'sadiqsof_giftshoes');

        /**
         * BaseShoes
         *
         * This class has been auto-generated by the Doctrine ORM Framework
         *
         * @property integer $sku
         * @property string $name
         * @property string $keywords
         * @property string $description
         * @property string $manufacturer
         * @property float $sale_price
         * @property float $price
         * @property string $url
         * @property string $image
         * @property string $category
         * @property Doctrine_Collection $Viewes
         *
         * @package ##PACKAGE##
         * @subpackage ##SUBPACKAGE##
         * @author ##NAME## <##EMAIL##>
         * @version SVN: $Id: Builder.php 6820 2009-11-30 17:27:49Z jwage $
         */
        abstract class BaseShoes extends Doctrine_Record
        {
            public function setTableDefinition()
            {
                $this->setTableName('shoes');
                $this->hasColumn('sku', 'integer', 4, array('type' => 'integer', 'fixed' => 0, 'unsigned' => false, 'primary' => true, 'autoincrement' => false, 'length' => '4'));
                $this->hasColumn('name', 'string', 255, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => '255'));
                $this->hasColumn('keywords', 'string', 255, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => '255'));
                $this->hasColumn('description', 'string', null, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => ''));
                $this->hasColumn('manufacturer', 'string', 20, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => '20'));
                $this->hasColumn('sale_price', 'float', null, array('type' => 'float', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => ''));
                $this->hasColumn('price', 'float', null, array('type' => 'float', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => ''));
                $this->hasColumn('url', 'string', null, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => ''));
                $this->hasColumn('image', 'string', null, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => ''));
                $this->hasColumn('category', 'string', 50, array('type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => '50'));
            }

            public function setUp()
            {
                parent::setUp();
                $this->hasMany('Viewes', array('local' => 'sku', 'foreign' => 'sku'));
            }
        }

        --------------ShoesTable--------

        <?php
        class ShoesTable extends Doctrine_Table
        {
            function getAllShoes($from = 0, $total = 15)
            {
                $q = Doctrine_Query::create()
                    ->from('Shoes')
                    ->limit($total)
                    ->offset($from);
                return $q->execute(array(), Doctrine::HYDRATE_ARRAY);
            }
        }

        ---------------Shoes Model-----------------

        <?php
        /**
         * Shoes
         *
         * This class has been auto-generated by the Doctrine ORM Framework
         *
         * @package ##PACKAGE##
         * @subpackage ##SUBPACKAGE##
         * @author ##NAME## <##EMAIL##>
         * @version SVN: $Id: Builder.php 6820 2009-11-30 17:27:49Z jwage $
         */
        class Shoes extends BaseShoes
        {
            function __construct()
            {
                parent::__construct();
                $this->shoesTable = Doctrine::getTable('Shoes');
            }

            function getAllShoes()
            {
                return $this->shoesTable->getAllShoes();
            }
        }
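
    A likely culprit, offered as an assumption rather than a confirmed diagnosis: assigning to an undeclared property such as $this->shoesTable on a Doctrine_Record is routed through Doctrine's magic __set(), which tries to resolve 'shoesTable' as a relation alias, hence the exception. A sketch of a model that avoids the property entirely:

        <?php
        // sketch — fetch the table object where needed instead of
        // storing it on the record, so no relation lookup is triggered
        class Shoes extends BaseShoes
        {
            function getAllShoes()
            {
                return Doctrine::getTable('Shoes')->getAllShoes();
            }
        }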

  • Split large repo into multiple subrepos and preserve history (Mercurial)

    - by Andrew
    We have a large base of code that contains several shared projects, solution files, etc. in one directory in SVN. We're migrating to Mercurial. I would like to take this opportunity to reorganize our code into several repositories to make cloning for branching have less overhead. I've already successfully converted our repo from SVN to Mercurial while preserving history. My question: how do I break all the different projects into separate repositories while preserving their history? Here is an example of what our single repository (OurPlatform) currently looks like:

        /OurPlatform
        ---- Core
        ---- Core.Tests
        ---- Database
        ---- Database.Tests
        ---- CMS
        ---- CMS.Tests
        ---- Product1.Domain
        ---- Product1.Stresstester
        ---- Product1.Web
        ---- Product1.Web.Tests
        ---- Product2.Domain
        ---- Product2.Stresstester
        ---- Product2.Web
        ---- Product2.Web.Tests
        ==== Product1.sln
        ==== Product2.sln

    All of those are folders containing VS projects, except for the solution files. Product1.sln and Product2.sln both reference all of the other projects. Ideally, I'd like to take each of those folders and turn them into separate Hg repos, and also add new repos for each project (they would act as parent repos). Then, if someone was going to work on Product1, they would clone the Product1 repo, which contained Product1.sln and subrepo references to ReferenceAssemblies, Core, Core.Tests, Database, Database.Tests, CMS, and CMS.Tests. So, it's easy to do this by just hg init'ing in the project directories. But can it be done while preserving history? Or is there a better way to arrange this?
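
    Since the repo is already in Mercurial, one sketch of a history-preserving split uses hg convert with a filemap (the convert extension must be enabled; the file names here are assumptions):

        # core.filemap — keep only the Core folder, promoted to the root
        include Core
        rename Core .

        # produce a Core-only repository that retains Core's history
        hg convert --filemap core.filemap OurPlatform Core-split

    Repeating this per folder yields one repo per project; the parent repos can then reference them through .hgsub entries.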

  • Authenticate sites with different domain names using the Facebook API

    - by Onema
    We have a CMS that supports multiple sites. One of our features allows our users (the site admins) to connect to the site's Facebook account to allow status updates, create events and upload pictures to FB from within the CMS. The authentication needs to occur only once, since each site may have multiple site admins who do not have access to the site's FB user name and password. We use an iframe and authenticate using $facebook->require_login(), which redirects the user to the FB login and authentication pages. All this works just fine, but when the user hits "Allow" the authentication will break, as it will only redirect to whatever is in the "Post-Authorize Redirect URL" field, making the app unusable for any domain except the one in the "Post-Authorize Redirect URL". I know other APIs' authentication methods, like Vimeo's and YouTube's, allow you to specify a NEXT parameter, which is the equivalent of the "Post-Authorize Redirect URL" but can be set at run time. How can I make this work for multiple domain names? Any hints on this issue will be of great help.

  • How do you organise multiple git repositories?

    - by dbr
    With SVN, I had a single big repository I kept on a server and checked out on a few machines. This was a pretty good backup system and allowed me to easily work on any of the machines. I could check out a specific project, commit, and it updated the 'master' project, or I could check out the entire thing. Now I have a bunch of git repositories for various projects, several of which are on github. I also have the SVN repository I mentioned, imported via the git-svn command. Basically, I like having all my code (not just projects, but random snippets and scripts, some things like my CV, articles I've written, websites I've made and so on) in one big repository I can easily clone onto remote machines, or memory-sticks/harddrives as backup. The problem is that it's a private repository, and git doesn't allow checking out only a specific folder (one that I could push to github as a separate project, but have the changes appear in both the master repo and the sub-repos). I could use the git submodule system, but it doesn't act how I want it to (submodules are pointers to other repositories and don't really contain the actual code, so they're useless for backup). Currently I have a folder of git repos (for example, ~/code_projects/proj1/.git/, ~/code_projects/proj2/.git/), and after making changes to proj1 I do git push github, then I copy the files into ~/Documents/code/python/projects/proj1/ and do a single commit (instead of the numerous ones in the individual repos), then git push backupdrive1, git push mymemorystick, etc. So, the question: how do you organise your personal code and projects with git repositories, and keep them synced and backed up?

  • Web hosting for multiple web sites providing system isolation

    - by Justin
    We have a small number of projects where we expect the client will not be maintaining the installed versions of the applications we install to power the site (such as Drupal). Given that an important part of security is keeping things updated, we don't want to host these projects on our Plesk-powered dedicated servers that currently host lots of our other clients' websites. Our goal is to find a host where we can deploy isolated instances (be these slices, virtual servers, grid servers, etc.) for each individual web site (or group of 2-3) as we need them. These instances would be completely separate, so that if one web site were hacked it would not impact any other site.

    Typical hosting requirements:

    - Linux
    - Apache
    - PHP 5
    - MySQL
    - Supports Drupal
    - Ability to set up a cron task (but we don't need SSH access)
    - Daily backups
    - Virtualized/cloud hosting (we want to avoid shared)
    - Pricing per site is around $25/month
    - OS is patched automatically

    Some options we have considered but that won't work:

    - MediaTemple: two major data-center-wide security incidents and recent downtime foster doubt about this host's technical ability.
    - Slicehost: this would require us to manage the entire server, which we don't want to do.
    - Rackspace Cloud Sites (formerly Mosso): no backup options.

    Do you have any recommended hosting options given these requirements?
