Search Results

Search found 856 results for 'replicate'.


  • Nagios DNX plugins

    - by danneh3826
    I'm toying with the idea of multiple Nagios instances set up to monitor our infrastructure. I've looked at all the various methods of distributed Nagios checks, and I think DNX comes out the closest. DNX handles failure of worker nodes; that's fine. What happens if the main DNX server fails, though? Is there a way to replicate the server too? I'm using AWS EC2 primarily, so I can utilise Elastic Load Balancing for the web UI, but I need to be able to handle failure of the AZ where the monitoring server is, and essentially have a second server pick up the checking load (active/passive or active/active, so long as it doesn't fail completely).

    The other thing I'm trying to solve is an issue with routing. What I'd like is to have multiple nodes report a fault before Nagios confirms it as critical. Not for the NRPE checks, as they're pretty self-explanatory, but for things more like check_ping. I often have routing issues out of AWS to certain datacenters, so Nagios can report bad/no ping/timeout as a critical issue even though the machine in question is working fine. Would it be possible to have a setup where a worker complains that a service check is critical, and have a second worker node (positioned in another datacenter/AZ) also report the service as critical before the Nagios central server issues a critical alert? I realise I might be asking a bit much (how far down the line do you go setting up failover systems before it starts to get ridiculous), but surely someone must have thought of this scenario when developing DNX?
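
    Absent a DNX-level answer, the two-vantage-point requirement can be approximated with a wrapper plugin that only reports CRITICAL once a second location agrees. A minimal sketch in Python, assuming passwordless SSH to a helper host in another AZ (the hostname is illustrative, and none of this is part of DNX itself):

        #!/usr/bin/env python
        # check_ping_dual.py -- go CRITICAL only when two vantage points agree.
        import os
        import subprocess
        import sys

        target = sys.argv[1]
        PING = ["ping", "-c", "3", "-W", "2", target]
        devnull = open(os.devnull, "w")

        # 1) Ping from this worker's own location
        if subprocess.call(PING, stdout=devnull, stderr=devnull) == 0:
            print("PING OK - %s reachable locally" % target)
            sys.exit(0)

        # 2) Confirm from a second vantage point before alarming
        remote = ["ssh", "checker.other-az.example.com"] + PING
        if subprocess.call(remote, stdout=devnull, stderr=devnull) != 0:
            print("PING CRITICAL - %s down from both vantage points" % target)
            sys.exit(2)

        print("PING WARNING - %s unreachable locally only (routing issue?)" % target)
        sys.exit(1)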

    Read the article

  • As an admin, what tools do you use to log what you do to your boxes?

    - by Jerry
    I am more of a Linux applications developer than an admin. Over time, I've built servers and maintained them, sometimes to offer services, mostly just to develop the applications I work on. Way back when, I would create a file in my account to keep notes on what I did on each machine, so that I could replicate that when I migrated to other machines. Nowadays, I set up a private Trac installation, install its blog plugin, and then use that to make notes of everything I install and most commands that I run, as well as their output. This gives me a combination wiki and blog that I find very useful as a "captain's log". I do this mostly so that when I migrate to a new, clean machine, I have a much easier time bringing it up.

    And yet I am always amazed when I see others just install this, delete that, run this, set up this config, ... without seeming to use any way to actually note what they are doing. What do YOU do, and what tools are available? I am especially interested in the transition between maintaining a few machines for a few people and maintaining several to dozens of machines providing a real service. What are the best practices, and where can I find good resources? Thanks!
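
    At the low-tech end, the "captain's log" above can start as a one-file helper dropped onto each box; a minimal sketch in Python (the log path and format are arbitrary):

        #!/usr/bin/env python
        # note.py -- append a timestamped, host-tagged note to a plain-text log.
        # Usage: note.py installed nginx from backports
        import os
        import socket
        import sys
        import time

        LOG = os.path.expanduser("~/admin-log.txt")
        entry = "%s %s: %s\n" % (time.strftime("%Y-%m-%d %H:%M"),
                                 socket.gethostname(),
                                 " ".join(sys.argv[1:]))
        with open(LOG, "a") as f:
            f.write(entry)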

    Read the article

  • Firefox URL / link to a group of saved bookmarks?

    - by This_Is_Fun
    In Firefox you can easily save a group of tabs together as bookmarks. When (re-)accessing this group, the cascading bookmark menu shows each individual bookmark and, under a separator line, "Open All in Tabs". I'm looking for a way to launch those tabs without going up through the bookmark menu. Possible options:

        a) Record a simple macro with any number of "superuser" utilities (not the preferred option, since many little macros are hard to keep track of)
        b) Use AutoHotkey (similar to option a, and more flexible once you learn the basics)
        c) Work out how Firefox loads all those tabs; the info must be stored somewhere (as some type of URL?)

    Quick summary: the moment I click on "Open All in Tabs", I am clicking on something very similar to a hyperlink. How do I find the content (exact code) of that 'hyperlink', and/or how do I easily launch the tabs?

    Edit #1: I'm looking for a way to launch those tabs without going up through the bookmark menu, or cluttering the bookmarks toolbar, which I hide anyway :o)

    Edit #2: I tried to keep the question simple by not mentioning AutoHotkey programming. The objective is to launch all the tabs using a button on an AHK GUI. When grawity said, "It's just an ordinary folder containing ordinary bookmarks," they reminded me that I can easily find the folder; now, how do I launch the URLs inside that folder? FYI, basic-level AHK works like this:

        ; Open one folder
        ButtonWinMerge_Files:
        Run, C:\Program Files\WinMerge\
        Return

        ; Use the default web browser for one link
        ButtonGoogle:
        Run, http://google.com
        Return

    Question still open: how do I replicate, with one click, the way Firefox launches those tabs?
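
    Since AHK is already in play, one workable pattern is to keep the group's URLs in a plain text file and loop over it from a button handler; Firefox itself stores the group as ordinary bookmarks in places.sqlite (tables moz_bookmarks and moz_places), which is where an exporter could read them from. A sketch in AHK v1 (the file path is illustrative):

        ; Launch every URL listed in tabs.txt (one per line) in the default browser
        ButtonOpenAllTabs:
        Loop, Read, C:\Utils\tabs.txt
        {
            Run, %A_LoopReadLine%
        }
        Return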

    Read the article

  • Does anyone know where I could find a 2-input USB voltage meter?

    - by John O
    What we really need is a tiny UPS, of sorts. We'll be hooking up a solar cell and a battery to a single-board computer. Currently, that SBC is a custom PIC32 device, and it does its own UPS and voltage-monitoring duties. I've been tasked with trying to replicate all of its features with off-the-shelf products... and for the most part I've succeeded. But I don't currently have any way to switch between two sources of juice, or to monitor when they're getting low.

    These guys have something: http://www.mini-box.com/picoUPS-100-12V-DC-micro-UPS-system-battery-backup-system - I really like it, and the price is well within the budget. We might even work it in, though it does 12V and I'll probably be using 5V; there are enough engineers on hand to figure something out. But I'd still have no idea what the voltage was for the PV or the battery. I was hoping that there was some simple little USB multimeter thing that I could use to monitor this with, but I can't seem to come up with anything. I've found all sorts of cool hardware, but nothing that will help us. Does anyone know of anything?

    Read the article

  • Save and restore multiple layers within a Photoshop action that flattens

    - by SuitCase
    I'm editing comic pages with four layers: "background", "foreground", "lineart" and "over lineart". I have a Photoshop action that includes a Mode > Bitmap command, which requires the image to be flattened. I need this part of the action because I use the Halftone Screen method of reducing the greyscale image to bitmap on the "background" layer, creating a certain effect; I am pretty sure there is no filter or anything else that gives the same effect. After the mode is changed to bitmap, my action changes things back to greyscale for further changes.

    This poses a problem. I only want to do the bitmap mode change on the background layer, and after the change I want to restore the layer structure as it was, with the foreground, lineart and over-lineart layers back above the now-halftoned background. My current method of saving these layers and restoring them is clumsy. My action is able to automatically save the "foreground" layer by selecting it, cutting it, then pasting it back in after the mode change is over. But for the "lineart" and "over lineart" layers, I have to manually cut them, paste them into a new document, and later re-cut and re-paste after running my action. This is so clunky!

    What I would like to know is whether it's possible to set aside my layers and then bring them back, both in an automated way. An ugly (but functional) solution would be to replicate my manual steps of creating new documents and pasting the layers there temporarily, but I don't think Photoshop allows an action to do things outside of the current document. It seems the only way to do what I want is the clipboard trick, and that leaves me stuck because I have two more layers that can't fit onto the same clipboard. Help or suggestions would be appreciated. I can keep on doing it manually, but a comprehensive action would save me a ton of time.

    Read the article

  • Oracle 11g Data Guard over a WAN

    - by Dave LeJeune
    Hi - We are in the process of looking at using Oracle's Data Guard to replicate our 11g instance from a colo facility in Washington DC to Chicago. To give some basics: we have approximately 25TB of storage and a healthy transaction rate in the 1-2K/sec range. Also, because we process data in real time, we have a 24x7x365 requirement. We don't have any respites as far as volume goes, except for system upgrades (once every few months) where we take the system offline but then of course experience a spike in transactions when we bring it back online. Ideally we would want the second instance in the DG configuration semi-online in a read-only fashion for reports etc.

    We evaluated DG in 10g and were not overly impressed, and research seemed to show that earlier versions had issues with replication over a WAN, but I have heard good things about the changes the product has gone through with 11g. Can anyone confirm an instance of this size and transaction rate being replicated over a WAN, and if so, what is the general latency? Any information or experiences with a DG implementation of this size and scope would really be helpful (or larger; I also realize we are still relatively small compared to many others out there). Many thanks in advance.
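
    For WAN distances, the usual 11g starting point is maximum-performance mode with asynchronous redo transport, so primary commit latency is not gated on the DC-to-Chicago round trip, and the read-only reporting requirement maps to the Active Data Guard option (standby open read-only while redo apply continues). A sketch of the relevant primary-side parameter (the service and DB_UNIQUE_NAME are placeholders):

        -- Ship redo asynchronously to the Chicago standby (maximum performance)
        ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
          'SERVICE=chi_stby ASYNC NOAFFIRM
           VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chi_stby'
          SCOPE=BOTH;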

    Read the article

  • Replicated filesystem and EC2 MySQL

    - by El Yobo
    I'm currently investigating migrating our infrastructure over to run on Amazon's EC2 and am trying to figure out the best way to set up a MySQL service. I'm leaning towards running our own MySQL instances rather than going with Amazon's RDS, but am still considering the best approach for performance and cost on the instance itself. In order to have persistent data, the MySQL data needs to be on an EBS volume (with some form of striped RAID, e.g. RAID0 or RAID10) to improve persistence. However, EBS IO is limited by the network interface (gigabit, so a theoretical maximum of 128 MB/s), while the ephemeral volumes have no such problem. I did see a suggestion for running two MySQL servers on an instance, with a master running on the ephemeral disk (which we would also RAID) and a slave storing changes to an EBS volume, but this has some additional overhead and complexity (two servers). What I was imagining is using some form of replicated file system such that I could have:

        - a filesystem on top of a RAID0 of ephemeral volumes to maximise performance
        - all changes from the above immediately replicated to another RAID1 volume backed by multiple EBS volumes to ensure no data loss

    The advantages of this would be:

        - best possible IO performance for the DB server, with no network delay in IO
        - decreased IO on EBS volumes (as all read IO will be done on the ephemeral volumes), so decreased cost
        - good data security, as it's backed onto redundant EBS volumes

    However, I haven't seen an appropriate system to replicate all changes from one volume to the other; is there a filesystem, or any other approach, which will do this? The distributed file systems, e.g. GlusterFS, DRBD etc., seem to focus on replicating disks between servers; can they be set up to do what I'm interested in here? I also haven't seen anything about others taking this approach. Do I have a solution in need of a problem here (i.e. is performance good enough that this whole idea is redundant)? Is there some flaw in the plan?
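
    One block-level route to exactly this, without a distributed filesystem, is Linux md: stripe the ephemeral disks, then mirror that stripe against an EBS volume marked write-mostly, so normal reads never touch EBS. A sketch with placeholder device names (whether losing the ephemeral side on an instance stop is acceptable still needs thought):

        # RAID0 across the ephemeral disks for speed
        mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

        # RAID1 pairing the stripe with EBS; --write-mostly steers reads away
        # from the network-backed side, --write-behind lets its writes lag
        mdadm --create /dev/md1 --level=1 --raid-devices=2 \
              --bitmap=internal --write-behind=256 \
              /dev/md0 --write-mostly /dev/sdf

        mkfs.ext4 /dev/md1 && mount /dev/md1 /var/lib/mysql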

    Read the article

  • What are possible results/side effects if replication between DCs in a Windows domain is unable to occur?

    - by hydroparadise
    There's plenty of administration literature out there on how to properly manage Windows servers. But in real life, things don't always occur like you want them to. In Microsoft's Windows Server 2003 Administrator's Companion, out of 1400+ pages, there is only one page that I could find on setting up additional domain controllers. They make it sound seamless and don't reveal a whole lot about what happens if "peer" DCs are unable to replicate.

    Down to the specific issue at hand: we had a DC go down about a month ago due to a bad RAID controller. There was nothing critical that warranted immediate attention, so bringing it back up got put on the back burner. A month later, we got the DC back up and running and everything seemed OK. The next day, nobody is able to log on, complaining that the "user does not exist" or that Windows is "unable to establish a trust relationship". Knowing that I had just put the downed DC back on the network, I immediately took it off again and had everybody restart their workstations. After that, Exchange was fine, shares became available, and everybody was able to log in. After some event-log swimming, it would appear that everything started due to replication issues on the SYSVOL. I've read that you can force replication, but that would mean putting the DC back on the network, and I am afraid that something else could go wrong. So, what other issues could one expect to run into when two DCs have been unreplicated for over a month?
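
    For what it's worth, one month is inside the default tombstone lifetime, so reconnection is at least possible in principle; the built-in tooling can show how far apart the DCs have drifted before risking it (run from a healthy DC):

        rem Summarise replication health and last-success times across all DCs
        repadmin /replsummary

        rem Show per-partition inbound replication status for every DC
        repadmin /showrepl * /csv > repl.csv

        rem Directory service diagnostics, including the replication tests
        dcdiag /test:replications /v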

    Read the article

  • How to make a redundant desktop system with daily snapshots? (Is btrfs ready for use?)

    - by TestUser16418
    I want to configure a desktop system in which the home filesystem would be redundant (e.g. RAID-1) and would have weekly snapshots taken. I've already done this with ZFS; the snapshot system is wonderful, and with send/recv you can easily create backups on external media. Unfortunately, at this point, I want GNU+Linux and not FreeBSD or Solaris, so I'm looking for suggestions for good alternatives. I reckon that my alternatives are:

        1. btrfs - it seems to be exactly what I need: it has snapshots and commands that allow you to easily replicate zfs send. Yet all documentation mentions that it's still experimental. I can't seem to find any actual reports on its reliability or usability issues. Can you point me to any information that could clarify whether it would be a possible choice? I have a large preference for this option, mostly because I don't want to reformat the drives when btrfs becomes ready, but there's no information on whether it's usable at all, whether it's a silly idea to use it, etc. The question that I cannot get the answer to is what "experimental" means.
        2. LVM snapshots and ext4 - preferably not, since it can consume an awful amount of space when new files are created. Creating 200 GB of files requires 200 GB of free space plus 200 GB additionally for snapshots. I have also found it unreliable: a failed metadata rewrite results in an unreadable PV. I'm wondering how btrfs would compare here.
        3. A single filesystem (ext4) on a RAID-1 array with custom COW snapshots using hardlinks (like cp -al). That's my current preference if I can't use btrfs.

    So: how experimental is btrfs, which option should I choose, and do I have any other options? What if I don't keep external incremental backups; would that affect my choice?
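
    For reference, the btrfs workflow being weighed looks roughly like this; a sketch with placeholder device names (on a filesystem still labelled experimental, the usual caveats apply):

        # Mirror both data and metadata across two disks
        mkfs.btrfs -d raid1 -m raid1 /dev/sda2 /dev/sdb2
        mount /dev/sda2 /home

        # Cheap copy-on-write snapshot of home, named by date
        btrfs subvolume snapshot /home /home/.snapshots/$(date +%Y-%m-%d)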

    Read the article

  • Exchange 2007 Backup - For a newbie

    - by mew3900
    I am trying to set up an Exchange 2007 backup solution. After doing a lot of reading: with Server 2008, Microsoft have decided that unless you are willing to spend a great deal on a 3rd-party solution, you are pretty stuck! Essentially, what I have been asked to do is perform an offline file backup of our current Exchange server and replicate it onto a new second server. The reasoning behind this is that we need to upgrade our current installation of Exchange 2007 to SP2 so that the Exchange plug-in for Windows Server Backup becomes available to us. From there I can then actually take an Exchange-aware backup weekly and take it off site. Ideally, we could then also migrate to this new server and keep the old one as a failover.

    Is there a way I can copy the required files across onto a second server? I doubt very much it is that simple. I may be barking up completely the wrong tree, but I have very limited knowledge of Exchange, and any help and advice on how to resolve this would be much appreciated. Thanks in advance
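
    For the interim file-level copy, the mechanical part is usually: stop the Information Store so the databases are quiescent, then mirror the database and log folders. A sketch with placeholder paths (this yields only a crash-consistent copy, and is no substitute for the Exchange-aware backup once SP2 is on):

        rem Stop the Information Store so the .edb and log files are closed
        net stop MSExchangeIS

        rem Mirror the database and log folders to the second server
        robocopy "D:\Exchange\MDBDATA" "\\server2\d$\Exchange\MDBDATA" /MIR /COPYALL /R:2 /W:5

        net start MSExchangeIS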

    Read the article

  • Processing of Group Policy failed on 2008 servers only; Name Resolution failure on the current domain controller

    - by Ken Wolfrom
    We spent the last 3 months doing an upgrade from a 2003 domain to a 2008R2 domain. Our last DC was rebuilt (5 total) and brought back online. After it was put online, some 2008 and 2008R2 servers (10 now) started getting these errors in the event logs:

        The processing of Group Policy failed. Windows could not resolve the user name. This could be caused by one or more of the following:
        a) Name Resolution failure on the current domain controller.
        b) Active Directory Replication Latency (an account created on another domain controller has not replicated to the current domain controller).

    We can duplicate this if we drop to a command prompt and run GPUPDATE manually. When our users attempt to access a shared drive (\\server\share) on an affected server, they get this error: "There are currently no logon servers available to service the logon request."

    This is only affecting the 2008 OS, and it is a random set of about 10 servers out of some 30 with this OS. The services on the machines are running OK, and we are able to log in with domain\user at the consoles and via RDP. We can log onto an affected machine, get to \\domainname\sysvol, and see the GPOs. We have checked the replication topology of the domain, and it states all servers can replicate with no errors. We went back to the last DC, demoted it, removed DNS, removed it from the domain, and waited 24 hours, and the issue still persists. We picked one server, removed it from the domain, rebooted, and added it back to the domain with no problems, but it still shows this behavior.

    Bottom line: we have some servers on which the domain will not let any UDP client/server apps or GPOs process, but the TCP-related items seem to work fine; HTTP, TCP calls, SQL and Oracle DBs connect and process. Any input on possible reasons for this issue and fixes? It is only affecting the 2008 servers on a 2008R2 domain.

    Read the article

  • How do you get Linux to honor setuid directories?

    - by Takigama
    Some time ago, in a conversation in IRC, one user in a channel I was in suggested someone setuid a directory in order for files to inherit its user id, to solve a problem someone else was having. At the time I spoke up and said "Linux doesn't support setuid directories". After that, the person giving the advice showed me a pastebin (http://codepad.org/4In62f13) of his system honouring the setuid permission set on a directory. Just to explain: when I say "Linux doesn't support setuid directories", what I mean is that you can run "chmod u+s directory" and it will set the bit on the directory, but Linux (as I understood it) ignores this bit on directories.

    Try as I might, I just can't quite replicate that pastebin. Someone suggested to me once that it might be possible to emulate the behaviour with SELinux, and playing around with rules, it's possible to force a uid on a file, but not from a setuid directory permission (that I can see). Reading around on the internet has been fairly uninformative; most places claim "no, setuid on directories does not work with Linux", with the occasional "it can be done under specific circumstances" (such as this: http://arstechnica.com/etc/linux/2003/linux.ars-12032003.html).

    I don't remember who the original person was, but the original system was a Debian 6 system, and the filesystem it was running was XFS mounted with "default,acl". I've tried replicating that, but no luck so far (tried with various versions of Debian, Ubuntu, Fedora and CentOS). Can anyone clue me in on what or how you get a system to honor setuid on a directory?
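
    For contrast, what Linux does honour on directories is the setgid bit, which makes new files inherit the directory's group (not its owner). A quick demonstration of the asymmetry (paths and group name are illustrative):

        # setgid: new files inherit the directory's group -- honoured on Linux
        mkdir /srv/shared
        chgrp staff /srv/shared
        chmod g+s /srv/shared
        touch /srv/shared/a && ls -l /srv/shared/a   # group is 'staff'

        # setuid: the bit sticks to the directory, but ownership is unaffected
        chmod u+s /srv/shared
        touch /srv/shared/b && ls -l /srv/shared/b   # owner is the creator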

    Read the article

  • mysqld - master-to-slave replication using rsync, InnoDB log sequence number issues

    - by Luis
    I've read several of the related topics posted here, but I have not been able to avoid this InnoDB error. The steps I took to replicate data from a Slackware master, 5.5.27-log (S), to a FreeBSD slave, 5.5.21-log (F), were these:

        1. (S) flush tables with read lock;
        2. (S) in another terminal: show master status;
        3. (S) stop mysqld via the command line in a third terminal;
        4. (F) while both servers are stopped, rsync the mysql datadir from (S), excluding master.info, mysql-bin and relay-* files;
        5. (F) start mysqld (skip-slave)

    On startup, the slave logs:

        121018 12:03:29 InnoDB: Error: page 7 log sequence number 456388912904
        InnoDB: is in the future! Current system log sequence number 453905468629.
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
        InnoDB: for more information.

    This kind of error happens for a lot of tables. I know I can use a dump instead, but the database is large, ca. 70GB, and the systems are slow (old), so I would like to get this replication working with a direct data transfer. What should I try to solve this issue?
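
    The error text is the clue: a log sequence number "in the future" means InnoDB started against redo logs that do not match the copied tablespaces, typically stale ib_logfile* files left over from a previous install on (F). A sketch of a sequence that avoids it (paths and coordinates are placeholders; the log file and position come from step 2):

        # On (F), with mysqld stopped: clear redo logs from any earlier install,
        # and make sure the copy from (S) includes ibdata* and ib_logfile*
        rm -f /var/db/mysql/ib_logfile*
        rsync -av --exclude master.info --exclude 'mysql-bin*' --exclude 'relay-*' \
              master:/var/lib/mysql/ /var/db/mysql/

        # After starting mysqld on (F), point the slave at the recorded position
        mysql -e "CHANGE MASTER TO MASTER_HOST='master.example.com',
                  MASTER_USER='repl', MASTER_PASSWORD='secret',
                  MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=456;
                  START SLAVE;"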

    Read the article

  • Git workflow for two tight-knit projects

    - by Pioul
    Two very similar projects: I'm maintaining an online Markdown editor using Git as the RCS (and accessorily made available on GitHub). From this web app, I've created a Chrome app: the code is the same, aside from some Chrome technicalities. I care about open-sourcing these two projects. Still, the Chrome app's code being the same as the web app's except for some dull details, I initially chose to (1) not publish the Chrome app on GitHub, and (2) not use Git to manage its code. Instead, I would manually review the web app's commits, then replicate the few changes in the Chrome app.

    ... slightly drifting apart: However, I've decided to add a feature to the Chrome app only. So, even though both codebases will remain broadly similar, they'll be diverging enough to make me reconsider the rationale behind my initial decision not to version control or share the Chrome app's source code. Since I'm now willing to use Git to version control both apps, and I want to share both of them on GitHub, how should I go about it? Should I use two different repositories, or one repo with two long-running branches? What would be the pros and cons of each approach in this context? What would be the easiest/fastest way to regularly "import" commits from the web app to the Chrome app, since the web app is going to remain the master branch? Is cherry-picking the only solution?
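
    For the single-repo option with two long-running branches, the periodic "import" does not have to be cherry-picking: an ordinary merge from master carries over everything that is not Chrome-specific, and cherry-pick stays available for one-off grabs. A sketch (the branch name is arbitrary):

        # One-time setup: a long-running branch for the Chrome app
        git checkout -b chrome master

        # Routine import: pull all new web-app work into the Chrome branch
        git checkout chrome
        git merge master

        # Or, to carry over only selected web-app commits
        git cherry-pick <sha-of-web-app-commit>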

    Read the article

  • BYOD (accessing files) on a domain without joining?

    - by Philip White
    I run a Samba 4 instance at a small private school. This makes a regular Linux server appear as a domain controller. There are two relevant benefits to this: I have a Samba share for people's documents, and I use the Redirected Folders feature to allow any employee to sit down at any PC, log in with their domain credentials, and have their My Documents point to network storage. Everyone has a mapped drive (using Group Policy Preferences) to a share specific to their account type: students can access one share (one share for all students), teachers have another, and office staff have another.

    However, I would like to allow BYOD (Bring Your Own Device). Some employees are already asking for it with their personal laptops, and I know eventually most everyone will want it. Is there any way to replicate the two features above without having to join PCs to the domain? Joining personal PCs is impractical, if only because only professional editions of Windows support this. Ideally, any operating system (including mobile) could access the relevant shares, but of course Windows is key. Offline caching is optional. (I could set up OpenVPN for teachers who want to access their files from home.) The problem with simply giving SSH access to the relevant shares is primarily that Samba 4 relies on ext4 ACLs and ext4 extended attributes to maintain NTFS permissions; writing files directly to the Linux server would bypass this and would (probably) not be interoperable with Samba 4. Right now I am completely flexible. I am even fine with scrapping the whole domain and using some other software for the two features above. How can I allow school employees and students the freedom to securely share files without requiring everyone to have specific editions of Windows?
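
    On the share side, nothing above strictly requires domain membership: an unjoined Windows, Mac or mobile SMB client can still authenticate against the Samba DC with domain credentials when it connects. A minimal smb.conf sketch of the per-role shares (paths and group names are placeholders):

        [students]
            path = /srv/shares/students
            valid users = @students
            read only = no

        [teachers]
            path = /srv/shares/teachers
            valid users = @teachers
            read only = no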

    Read the article

  • IIS HTTP Error 403.1 - Forbidden: Execute access is denied

    - by coxymla
    I have an ASP.NET 1.1 application running on IIS 6 / Windows Server 2003. It's our application, but we're trying to specifically replicate a customer's installation, so the app folder has been copied entirely from their production server onto our test machine, and then we've created the virtual directory and web application in IIS manually. The problem I have is that when we access the app, we get the standard IIS security error message:

        The page cannot be displayed
        You have attempted to execute a CGI, ISAPI, or other executable program from a directory that does not allow programs to be executed.
        Please try the following:
        - Contact the Web site administrator if you believe this directory should allow execute access.
        HTTP Error 403.1 - Forbidden: Execute access is denied.
        Internet Information Services (IIS)

    Now this is pretty standard, except that as far as I can see it isn't anything so simple. I have checked:

        - the IIS user has read access to the directory
        - the IIS user and Network Service users have read/write access to the Temporary ASP.NET Files folder
        - the virtual directory is set to the correct version of ASP.NET
        - the ASP.NET 1.1 Web Service Extension is allowed
        - the virtual directory has the correct mappings of file extensions, and all verbs, to the ASP.NET 1.1 DLL
        - the virtual directory's properties allow Scripts and Executables to be run
        - anonymous access is turned on, and the username and password are correct

    What am I missing?

    Read the article

  • C# Sockets Buffer Overflow No Error

    - by Michael Covelli
    I have one thread that is receiving data over a socket like this:

        while (sock.Connected)
        {
            // Receive Data (Block if no data)
            recvn = sock.Receive(recvb, 0, rlen, SocketFlags.None, out serr);
            if (recvn <= 0 || sock == null || !sock.Connected)
            {
                OnError("Error In Receive, recvn <= 0 || sock == null || !sock.Connected");
                return;
            }
            else if (serr != SocketError.Success)
            {
                OnError("Error In Receive, serr = " + serr);
                return;
            }

            // Copy Data Into Tokenizer
            tknz.Read(recvb, recvn);

            // Parse Data
            while (tknz.MoveToNext())
            {
                try
                {
                    ParseMessageAndRaiseEvents(tknz.Buffer(), tknz.Length);
                }
                catch (System.Exception ex)
                {
                    string BadMessage = ByteArrayToStringClean(tknz.Buffer(), tknz.Length);
                    string msg = string.Format("Exception in MDWrapper Parsing Message, Ex = {0}, Msg = {1}",
                                               ex.Message, BadMessage);
                    OnError(msg);
                }
            }
        }

    And I kept seeing occasional errors in my parsing function indicating that the message wasn't valid. At first, I thought that my tokenizer class was broken. But after logging all the incoming bytes to the tokenizer, it turns out that the raw bytes in recvb weren't a valid message. I didn't think that corrupted data like this was possible with a TCP data stream. I figured it had to be some type of buffer overflow, so I set sock.ReceiveBufferSize = 1024 * 1024 * 8; and the parsing error never, ever occurs in testing (it happens often enough to replicate if I don't change the ReceiveBufferSize). But my question is: why wasn't I seeing an exception or an error state or something if the socket's internal buffer was overflowing before I changed this buffer size?
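
    Worth noting as context: TCP presents a reliable, ordered byte stream, and when the receive buffer fills the sender is throttled rather than data being dropped, so the kernel has no overflow to report. What a single Receive call can legitimately return is a fragment of one message, or the tail of one plus the head of the next. A minimal length-prefixed framing sketch (hypothetical, not the poster's tokenizer) that tolerates arbitrary fragmentation:

        // Accumulates bytes until a complete [4-byte length][payload] frame exists.
        using System;
        using System.IO;

        class FrameReader
        {
            private readonly MemoryStream buf = new MemoryStream();

            // Append freshly received bytes (count = return value of Receive)
            public void Feed(byte[] data, int count)
            {
                buf.Write(data, 0, count);
            }

            // Returns one complete payload, or null if more bytes are needed
            public byte[] TryReadFrame()
            {
                byte[] all = buf.ToArray();
                if (all.Length < 4)
                    return null;
                int len = BitConverter.ToInt32(all, 0);
                if (all.Length < 4 + len)
                    return null; // frame not complete yet

                byte[] payload = new byte[len];
                Array.Copy(all, 4, payload, 0, len);

                // Keep whatever followed the frame for the next call
                buf.SetLength(0);
                buf.Write(all, 4 + len, all.Length - (4 + len));
                return payload;
            }
        }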

    Read the article

  • How do I use VS2010 One-Click Publish (MsDeploy) to deploy remotely from the command line?

    - by David
    On the remote web server I have installed the remote service at http://x.x.x.x/MsDeployAgentService. If I use the Web Application Project's Publish command in VS2010, I can successfully publish to this remote web server and update a specific IIS website. What I want to do now is execute this capability from the command line. I am guessing it is two steps. First, build the web application project using the relevant build configuration:

        msbuild "C:\MyApplication\MyWebApplication.csproj" /T:Package /P:Configuration=Release

    Then issue the MsDeploy command to have it publish/sync with the remote IIS server:

        msdeploy -verb:sync -source:package="C:\MyApplication\obj\Release\Package\MyWebApplication.zip" -dest:contentPath="My Production Website",computerName=http://x.x.x.x/MsDeployAgentService,username=adminuser,password=adminpassword

    Unfortunately, I get the error:

        Error: (10/05/2010 3:52:02 PM) An error occurred when the request was processed on the remote computer.
        Error: Source (sitemanifest) and destination (contentPath) are not compatible for the given operation.
        Error count: 1.

    I have tried a number of different combinations for the destination provider, but no joy :( Has anyone managed to replicate VS2010 Web Application Project "One Click" Publish from the command line?
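
    The error message itself points at the mismatch: a package source carries a site manifest, which pairs with the auto destination provider rather than contentPath (contentPath is for folder-to-folder syncs). A sketch of the likely shape of the command, with the same placeholder credentials (packaging also drops a MyWebApplication.deploy.cmd next to the .zip that wraps essentially this call):

        msdeploy -verb:sync ^
          -source:package="C:\MyApplication\obj\Release\Package\MyWebApplication.zip" ^
          -dest:auto,computerName=http://x.x.x.x/MsDeployAgentService,username=adminuser,password=adminpassword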

    Read the article

  • Problem rendering sIFR3 revision 436 on IE6 and IE7

    - by Mark
    Hi, I seem to have a problem with sIFR3. I'm using revision 436, and from all my testing it appears to be a problem associated with IE6 and IE7, as I cannot replicate the issue in Firefox, Chrome, Safari for Windows, or even IE8. The problem is occurring on my company's website and can be seen here: http://www.wyldeia.co.uk/blog.php

    When you first go to the page in IE6 or IE7, it appears to render fine. However, if you click away onto another page and then click the back button in the browser, all of the text is replaced by an error saying "Rendered with sIFR3 revision 436". If you refresh the page, the problem goes away, that is until you browse away and come back again. I've tried this on two separate machines both running IE 7.0.6000.16809, and a further machine running IE6, which I then upgraded to IE8. I thought initially it might be Flash Player related, but after upgrading from version 9 to 10 of the Flash Player the problem remains.

    Further digging around indicated that the error can be caused by having a corrupted Flash font file, or by having one present that was generated with a previous revision of sIFR3. However, I have exported the Flash font file using the supplied .fla from revision 436, and the problem remains. Usually I like to track the answer down myself, but as it is I'm at a bit of a loss on this one, so if anyone has any ideas what might be happening here I would be very grateful! Regards, Mark

    Read the article

  • Large Django application layout

    - by Rob Golding
    I am in a team developing a web-based university portal, which will be based on Django. We are still in the exploratory stages, and I am trying to find the best way to lay out the project/development environment. My initial idea is to develop the system as a Django "app" which contains sub-applications to separate out the different parts of the system. The reason I intend to make these "sub" applications is that they would not have any use outside the parent application whatsoever, so there would be little point in distributing them separately. We envisage that the portal will be installed in multiple locations (at different universities, for example), so the main app can be dropped into a number of Django projects to install it. We therefore have a different repository for each location's project, which is really just a settings.py file defining the installed portal applications and a urls.py routing the URLs to it.

    I have started to write some initial code, though, and I've come up against a problem. Some of the code that handles user authentication and profiles seems to be without a home. It doesn't conceptually belong in the portal application, as it doesn't relate to the portal's functionality. It also, however, can't go in the project repository, as I would then be duplicating the code over each location's repository. If I then discovered a bug in this code, for example, I would have to manually replicate the fix across all of the locations' project files.

    My idea for a fix is to make all the project repos a fork of a "master" location project, so that I can pull any changes from that master. I think this is messy, though, and it means that I have one more repository to look after. I'm looking for a better way to lay out this project. Can anyone recommend a solution, or a similar example I can take a look at? The problem seems to be that I am developing a Django project rather than just a Django application.
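
    The homeless authentication/profile code is the classic case for a third, reusable Django app in its own repository, pulled into each location's project as a dependency rather than copied or forked. A sketch with git submodules (the repository URL and app name are hypothetical); each location then just lists the app in its INSTALLED_APPS:

        # Inside a location's project repository: pin the shared app
        git submodule add https://example.com/git/portal-accounts.git accounts
        git commit -m "Add shared accounts app as a submodule"

        # When a fix lands upstream, each location pulls and re-pins
        cd accounts && git pull origin master && cd ..
        git commit -am "Update accounts app to latest"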

    Read the article

  • ICollectionView.SortDescriptions sort does not work if underlying DataTable has zero rows

    - by BigBlondeViking
    We have a WPF app that has a DataGrid inside a ListView.

        private DataTable table_;

    We do a bunch of dynamic column generation (depending on the report we are showing). We then run a query and fill the DataTable row by row; this query may or may not return data (that is not the problem, an empty grid is expected). We set the ListView's ItemsSource to the DefaultView of the DataTable:

        lv.ItemsSource = table_.DefaultView;

    We then set the sort on a column (based on the user's past usage of the app), using the method below:

        private void Sort(string sortBy, ListSortDirection direction)
        {
            ICollectionView dataView = CollectionViewSource.GetDefaultView(lv.ItemsSource);
            dataView.SortDescriptions.Clear();
            var sd = new SortDescription(sortBy, direction);
            dataView.SortDescriptions.Add(sd);
            dataView.Refresh();
        }

    In the zero-DataTable-rows scenario, the sort does not "hold", and if we dynamically add rows they will not be in sorted order. If the DataTable has at least 1 row when the sort is applied and we then dynamically add rows, the rows come in sorted correctly. I have built a standalone app that replicates this. It is an annoyance, and I could add a check to see if the DataTable was empty and re-apply the sort. Does anyone know what's going on here, and am I doing something wrong? FYI, what we based this on comes from MSDN as well: http://msdn.microsoft.com/en-us/library/ms745786.aspx
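
    A sketch of the "re-apply on first row" workaround mentioned above, assuming the last requested sort is remembered in two fields (the field names are hypothetical):

        // Hypothetical fields recording the last requested sort
        private string lastSortBy_;
        private ListSortDirection lastDirection_;

        private void HookEmptyTableWorkaround()
        {
            table_.RowChanged += (sender, e) =>
            {
                // Sort() ran against an empty view; re-apply on the first row
                if (e.Action == DataRowAction.Add && table_.Rows.Count == 1
                    && lastSortBy_ != null)
                {
                    Sort(lastSortBy_, lastDirection_);
                }
            };
        }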

    Read the article

  • Pixel Perfect Collision Detection in HTML5 Canvas

    - by Armin Ronacher
    Hi, I want to check a collision between two sprites in HTML5 canvas. For the sake of the discussion, let's assume that both sprites are IMG objects and a collision means that the alpha channel is not 0. Both of these sprites can have a rotation around the object's center, but no other transformation, in case this makes things any easier. Now, the obvious solution I came up with would be this:

        1. calculate the transformation matrix for both sprites
        2. figure out a rough estimation of the area where the code should test (like the offset of both, plus calculated extra space for the rotation)
        3. for all the pixels in the intersecting rectangle, transform the coordinates and test the image at the calculated position (rounded to the nearest neighbour) for the alpha channel, then abort on first hit

    The problems I see with that are that a) there are no matrix classes in JavaScript, which means I have to do the matrix math myself, and that could be quite slow, and b) I have to test for collisions every frame, which makes this pretty expensive. Furthermore, I have to replicate something I already have to do on drawing (or that canvas does for me, setting up the matrices). I wonder if I'm missing anything here and if there is an easier solution for collision detection.
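
    One way to sidestep the hand-rolled matrix math is to let canvas do the transforms: render each rotated sprite into its own offscreen canvas covering the intersection area, then compare alpha bytes via getImageData. A sketch (nothing here is optimised, and getImageData every frame is itself expensive):

        // Draw a sprite rotated about its centre into a w-by-h offscreen canvas;
        // (x, y) is the sprite's top-left within the region being tested
        function renderSprite(img, x, y, angle, w, h) {
          var c = document.createElement('canvas');
          c.width = w; c.height = h;
          var ctx = c.getContext('2d');
          ctx.translate(x + img.width / 2, y + img.height / 2);
          ctx.rotate(angle);
          ctx.drawImage(img, -img.width / 2, -img.height / 2);
          return ctx.getImageData(0, 0, w, h).data; // RGBA bytes
        }

        // True if any pixel is non-transparent in both renderings
        function collide(a, ax, ay, ar, b, bx, by, br, w, h) {
          var pa = renderSprite(a, ax, ay, ar, w, h);
          var pb = renderSprite(b, bx, by, br, w, h);
          for (var i = 3; i < pa.length; i += 4) { // every 4th byte is alpha
            if (pa[i] !== 0 && pb[i] !== 0) return true;
          }
          return false;
        }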

    Read the article

  • Need to replace 3rd party WinForm controls, what's the closest WPF equivalent?

    - by Refracted Paladin
    I am tired of Windows Forms... I just am. I am not trying to start a debate on it; I am just bored with it. Unfortunately, we have become dependent on 4 controls in DevExpress XtraEditors. I have had nothing but difficulties with them, and I want to move on. What I need now is the closest replacement for the 4 controls I am using. Here they are:

        1. LookUpEdit - a dropdown that filters the dropdown list as you type
        2. MemoExEdit - a textbox that pops up a bigger editing area when it has focus
        3. CheckedComboBoxEdit - a dropdown of checkboxes
        4. CheckedListBoxControl - a nicely columned list box of checkboxes

    This is a LOB app that has tons of data entry. In reality, the first two are nice but not essential. The second two are essential, in that I would either need to replicate the functionality or change the way the users interact with that particular data. I am looking for help in replicating these in a WPF environment, either with existing controls (CodePlex etc.) or in straight XAML. Any code or direction would be greatly appreciated, but mostly I am hoping to avoid commercial 3rd-party WPF controls and would instead like to focus on building them myself (though I need direction) or using CodePlex.
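
    For the two essential controls, stock WPF gets close in straight XAML: the checked list box is a ListBox with a CheckBox item template, and the checked dropdown is the same template inside a ComboBox. A sketch (Items, Name and IsSelected are assumed view-model properties; summarising the checked items in the collapsed ComboBox takes extra work):

        <!-- CheckedListBoxControl equivalent -->
        <ListBox ItemsSource="{Binding Items}">
            <ListBox.ItemTemplate>
                <DataTemplate>
                    <CheckBox Content="{Binding Name}"
                              IsChecked="{Binding IsSelected, Mode=TwoWay}" />
                </DataTemplate>
            </ListBox.ItemTemplate>
        </ListBox>

        <!-- CheckedComboBoxEdit equivalent -->
        <ComboBox ItemsSource="{Binding Items}">
            <ComboBox.ItemTemplate>
                <DataTemplate>
                    <CheckBox Content="{Binding Name}"
                              IsChecked="{Binding IsSelected, Mode=TwoWay}" />
                </DataTemplate>
            </ComboBox.ItemTemplate>
        </ComboBox>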

    Read the article

  • Odd Infragistics UltraComboEditor data binding non-bug

    - by Richard Dunlap
    Within an Infragistics 8.2 UltraComboEditor, we had the following properties set via C#:

        DataSource = dataSource;
        ValueMember = "Measure";
        DisplayMember = "Name";
        DataBindings.Add("Value", repository, "Measure");
        DataBindings["Value"].DataSourceUpdateMode = DataSourceUpdateMode.OnPropertyChanged;

    where dataSource was an array of objects, each with a property Measure, and repository was an object with a property Measure. (Those strings are actually constructor parameters; explicit strings are used here just to simplify the example.) In the course of some refactoring, the name of the property on the objects in the array was changed to BaseEnum (the objects are actually wrapped enumerations, for the curious), but the name of ValueMember above was not changed. And yet, the combo box binding continued to work through initial testing, beta testing, and even after release... until two customers emailed in noting that the combo box was no longer changing the underlying parameter. We were able to dig out the problem by careful study of the source code repository... despite being in the awkward position of not being able to replicate the buggy behavior internally.

    Two-part question:

        1. What's happening under the hood that allowed the binding to continue to function, and/or what might be unique about those two users that caused the binding to (correctly) fail? (OS version isn't alone the answer, and we get the unexpectedly functioning binding on machines that have never had a version of the software before, so we're not looking at rogue binaries.)
        2. Are there tools that might have been able to warn us about the misbind, even if something was cleaning up behind?

    Read the article

  • QWebView not loading external resources

    - by Nick
    Hi. I'm working on a kiosk web browser using Qt and PyQt4. QWebView seems to work quite well except for one quirk. If a URL fails to load for any reason, I want to redirect the user to a custom error page. I've done this using the loadFinished() signal to check the result, and change the URL to the custom page if necessary using QWebView.load(). However, any page I attempt to load here fails to pull in external resources like CSS or images. Using QWebView.load() to set the initial page at startup seems to work fine, and clicking any link on the custom error page will result in the destination page loading fine. It's just the error page that doesn't work. I'm really not sure where to go next. I've included the source for an app that will replicate the problem below. It takes a URL as a command line argument - a valid URL will display correctly, a bad URL (eg. DNS resolution fails) will redirect to Google, but with the logo missing. import sys from PyQt4 import QtGui, QtCore, QtWebKit class MyWebView(QtWebKit.QWebView): def __init__(self, parent=None): QtWebKit.QWebView.__init__(self, parent) self.resize(800, 600) self.load(QtCore.QUrl(sys.argv[1])) self.connect(self, QtCore.SIGNAL('loadFinished(bool)'), self.checkLoadResult) def checkLoadResult(self, result): if (result == False): self.load(QtCore.QUrl('http://google.com')) app = QtGui.QApplication(sys.argv) main = MyWebView() main.show() sys.exit(app.exec_()) If anyone could offer some advice it would be greatly appreciated.

    Read the article
