Search Results

Search found 7209 results on 289 pages for 'wpf triggers'.

  • How do I keep a table in sync across multiple SQL Databases?

    - by Refracted Paladin
    I have a WinForms data-entry application that uses 4 separate databases. This is an occasionally connected app that uses merge replication (SQL 2005) to stay in sync, and that part is working just fine. The next hurdle I am trying to tackle is adding filters to my publications. Right now we are replicating 70 MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using filters I am able to accomplish this (see code below), but I had to make a mapping table in order to do so. This mapping table consists of 3 columns: a PrimaryID (guid), WorkerName (varchar), and ClientID (int).

    The problem is I need this table present in all FOUR databases in order to use it for the filter since, to my knowledge, views or cross-db queries are not allowed in a filter statement. What are my options? It seems like I would set it up to be maintained in 1 database and then use triggers to keep it updated in the other 3 databases. In order to be a part of the filter I have to include that table in the replication set, so how do I flag it appropriately? Is there a better way altogether?

        SELECT <published_columns>
        FROM [dbo].[tblPlan]
        WHERE [ClientID] IN
            (SELECT ClientID FROM [dbo].[tblWorkerOwnership]
             WHERE WorkerID = SUSER_SNAME())

    Replication also allows you to chain filters together; this next one sits below the first one, so it only pulls from the first's filtered set:

        SELECT <published_columns>
        FROM [dbo].[tblPlan]
        INNER JOIN [dbo].[tblHealthAssessmentReview]
            ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]

    P.S. - I know how illogical the DB structure sounds. I didn't make it. I inherited it and was then told to make it a "disconnected app." Go figure!

  • Permission / owner issue with pushing to git when editing directly from repo?

    - by Susan
    I have a web interface for deploying scripts from our repo at GitHub to our live server. The web interface just triggers a bash script with some git commands. If I make changes locally, push to the repo, then run the bash script to pull from the repo to live, it works fine. However, if I make changes directly in the repo (via GitHub's web interface), I run into fast-forward / lock issues. These are the steps I'm taking: make a change on a file at the GitHub repo, then run a bash script (as apache) via web from the live server that attempts a git push / pull. I get these problems:

    PUSH:

        To [email protected]:name/name.git
        ! [rejected] master -> master (non-fast-forward)
        error: failed to push some refs to '[email protected]:name/name.git'
        To prevent you from losing history, non-fast-forward updates were rejected
        Merge the remote changes before pushing again. See the 'Note about
        fast-forwards' section of 'git push --help' for details.

    PULL:

        From github.com:name/name
        branch master -> FETCH_HEAD
        error: unable to unlink old 'includes/footer.inc' (Permission denied)
        Updating 8f6d922..d1eba9d

    If I SSH in as root and attempt a push / pull, it works fine. Any ideas on why this method would not work from apache?

  • Cannot find FIS partition 'initramfs'......... need help!!!

    - by vikramtheone
    Hi guys, I have Ubuntu 9.04 Linux running on Freescale's i.MX515 (ARM Cortex based) board. There were about 250 updates pending and I applied them today; some of the updates failed with the infamous errors:

        E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.
        E: Couldn't rebuild package cache
        E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.

    So, when I do the 'sudo dpkg --configure -a' I get new errors related to the FIS partition:

        Cannot find FIS partition 'initramfs'
        User postinst hook script [/usr/sbin/flash-kernel] exited with value 1
        dpkg: error processing linux-image-2.6.28-18-imx51 (--configure):
         subprocess post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of linux-image-imx51:
         linux-image-imx51 depends on linux-image-2.6.28-18-imx51; however:
          Package linux-image-2.6.28-18-imx51 is not configured yet.
        dpkg: error processing linux-image-imx51 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of linux-imx51:
         linux-imx51 depends on linux-image-imx51 (= 2.6.28.18.23); however:
          Package linux-image-imx51 is not configured yet.
        dpkg: error processing linux-imx51 (--configure):
         dependency problems - leaving unconfigured
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-2.6.28-18-imx51
        Cannot find FIS partition 'initramfs'
        dpkg: subprocess post-installation script returned error exit status 1

    What's going wrong here? I need help - I'm a newbie. Regards, Vikram

  • How do you use a Mac without a mouse?

    - by NitroxDM
    So I was using my Mac Mini and decided to set the application button on my Logic Click mouse to open Exposé. I had this set up at one time and found it useful. So I opened the Exposé control panel and took a look at the triggers for Exposé. There is a drop-down for a secondary trigger, and in that drop-down there were a ton of mouse buttons, 100+. I selected #32 just to see which button #32 was on my 5-button mouse. Turns out it's all of them! No matter what button I press, guess what happens... I get Exposé. So, using just the keyboard, how do I:

    Change the Exposé trigger? I can use the keyboard to bring up the Exposé control panel, but I can't get anything else to come into focus.
    Kill Exposé? Right now all I can say is D'oh!

    Any ideas?

  • Conditionally Rewrite Email Headers (From & Reply-To) Exchange 2010

    - by NorthVandea
    I have a client who maintains Company A (with email addresses %username%@companyA.com), and they own the domain companyB.com; however, there is no "infrastructure" (no Exchange server) set up specifically for companyB.com. My client needs the end users within its company (companyA.com) to be able to add a specific word or phrase to the subject (or body) of an outgoing email (they are only concerned with outgoing; incoming is a non-issue in this case) that triggers the Exchange 2010 servers to rewrite the From and Reply-To header [email protected] with [email protected], but this rewrite should ONLY occur if the user places the keyword/phrase in the subject (or body).

    I have attempted using transport rules and the New-AddressRewriteEntry cmdlet; however, each seems to have a limitation. From what I can tell, transport rules cannot rewrite the From/Reply-To fields, and New-AddressRewriteEntry cannot be conditionally triggered based on message content. So to recap:

    User sends email outside the organization: From and Reply-To remain [email protected].
    User sends email outside the organization WITH "KeyWord" in the subject or body: From and Reply-To change to [email protected] automatically.

    Anyone know how this could be done WITHOUT coding a new mail agent? I don't have the programming knowledge to code a custom agent... I can use any function of the Exchange Management Shell or Console. Alternatively, if anyone knows of a simple add-on program that could do this, that would be good too. Any help would be greatly appreciated! Thank you!!!

  • Has anyone managed to build php5-xapian on Ubuntu 12.04?

    - by jetboy
    As Xapian's been dropped from the Ubuntu repositories, I'm attempting to build my own .deb from the instructions here:

    http://article.gmane.org/gmane.comp.search.xapian.general/8855
    http://beeznest.wordpress.com/2011/07/06/howto-build-your-own-binaries-of-php-xapian-bindings-for-debian/

    I can only get things to progress beyond the first few seconds by leaving out 'rm debian/control', but if I do, it looks as if the Python and Ruby bindings are building and passing their versions of smoketest correctly. However, the PHP part of the build is failing with this error:

        /home/charlie/xapian-bindings-1.2.8/php/smoketest.php:38: include(xapian.php): failed to open stream: No such file or directory
        FAIL: smoketest.php

    There's a xapian.php file in /home/charlie/xapian-bindings-1.2.8/php/php5/ but if I copy it to /home/charlie/xapian-bindings-1.2.8/php/ or change the path to it in smoketest.php, the build fails right near the start with:

        dpkg-source: error: aborting due to unexpected upstream changes

    Unfortunately I'm out of my comfort zone building from source. Anyone got any ideas?

    Edit, following James' answer: it builds fine if I follow the instructions exactly. I built it on a test VM initially, but that didn't build the PHP package as PHP itself wasn't installed. Obvious gotcha, but worth mentioning. Installing generated the following error:

        Setting up php5-xapian (1.2.8-1) ...
        Processing triggers for libapache2-mod-php5 ...
        dpkg (subprocess): unable to execute installed post-installation script (/var/lib/dpkg/info/libapache2-mod-php5.postinst): Permission denied
        dpkg: error processing libapache2-mod-php5 (--install):
         subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         libapache2-mod-php5

    It's only a script for restarting Apache. Stopping Apache before running sudo dpkg -i php5-xapian_*.deb prevents the error. Xapian now shows up in phpinfo(). Job done. Thanks.

  • Possible boot conflict?

    - by Evan Kroske
    I was installing Ubuntu on a computer on which Windows XP was already installed. The computer has multiple hard drive bays, so I decided to remove the XP HDD and install Ubuntu on a blank HDD while it was the only HDD in the system. Unfortunately, if I now try to boot Ubuntu with the Windows XP drive in the second slot, nothing will boot. However, if Windows XP is in the first slot, it boots fine. Can anybody explain why this happens? When I was checking out the BIOS to see if something was messed up, I discovered that when Ubuntu is in the first slot, the BIOS doesn't recognize any HDDs; however, if XP is in the first slot, the BIOS recognizes both drives. Any hypotheses about why this happens?

    Edit: Here's the setup. I have an old server with seven SCSI HDD slots. I have five identical 68 GB SCSI drives, but I can keep only two plugged in. XP is still installed on the first drive, but I reinstalled Ubuntu on the second drive and had GRUB overwrite the XP bootloader on the first drive. Now the setup works fine, and I can use GRUB to load either XP or Ubuntu. However, if I plug another identical blank HDD into the third slot, the computer recognizes only the XP drive and doesn't boot. GRUB starts to load, then gives me a "disk not found" error. Running ls from the grub rescue prompt shows only one drive with two partitions. I guess this is a BIOS problem, but I'd still like to know what triggers it. What about a blank drive could cause the BIOS to freak out?

  • running chkdsk /F on a large mounted NTFS image file gets BSOD (Windows Vista)

    - by Citizentools
    Using ddrescue, I've created ISO files from the C: and D: drives on my Windows XP laptop's hard disk (after the laptop stopped booting and chkdsk etc. wouldn't fix it). I was able to mount the 60 GB D.iso file using OSFMount, and successfully recreated the D: drive on another laptop.

    The C.iso image is more problematic. ddrescue left about 3 MB of 85 GB total unrecovered after multiple passes (no big worries about this), and I'm able to mount it with OSFMount on a Windows Vista laptop. However, when I run chkdsk /F /V on the mounted drive (which was mounted as H:), I consistently get a blue screen (BSOD). chkdsk makes it through the first three passes, including index fixing and security descriptor fixes, without errors, but triggers a BSOD when it attempts to fix the volume records or bitmap.

    If I attempt to fix the drive by clicking Properties > Tools > Error checking > Check Now > Automatically fix file system errors, I get an alert box reading "Windows was unable to complete the disk check." I'd try a tool other than OSFMount, but it's the only thing I've found so far that will mount large ISO files, and it has worked for me up to now in this process.

    [Update 2011-11-13 18:41 EST] Just ran the same process using the original Windows XP laptop, with a different internal drive, and chkdsk worked like a champ. So the question is still interesting, but decidedly less urgent.

  • How do I use key combinations on an axis on a joystick in xorg?

    - by valadil
    I'm using xserver-xorg-input-joystick on Debian Stable so I can use a joystick in place of the mouse. I have mouse movement working correctly, but got stuck trying to add functions for some other keys. These work:

        #Left stick
        #Pointer
        Option "MapAxis1" "mode=relative axis=1.5x"
        Option "MapAxis2" "mode=relative axis=1.5y"
        #Right stick
        #Arrow keys
        Option "MapAxis4" "mode=relative keylow=Left keyhigh=Right"
        Option "MapAxis5" "mode=relative keylow=Up keyhigh=Down"

    But when I try to make key combos (so I can navigate windows and screens in xmonad) I have no luck:

        #dpad
        #xmonad focus
        #up/down toggle window. l/r choose screen.
        Option "MapAxis8" "mode=relative keylow=Super_L,k keyhigh=Super_L,j"
        Option "MapAxis7" "mode=relative keylow=Super_L,w keyhigh=Super_L,e"

    I've also tried Super_R, plain old Super, Meta, mod4mask, and anything else I can think of. These buttons print the letter, but don't appear to hold down the modifying key. The exception to that is Shift: if I specify Shift_L or Shift_R, I get a capital letter. xev indicates that the modifier keys are being pressed: if I lower Axis8, I get press Super_L, press k, release k, release Super_L. That looks like it should be working. Maybe this is an xmonad problem and not a joystick driver one?

    I'm also having trouble getting an axis to use other XF86 keys:

        # triggers
        # song selection
        Option "MapAxis3" "mode=relative keylow=none keyhigh=XF86AudioForward"
        Option "MapAxis6" "mode=relative keylow=none keyhigh=XF86AudioBack"

    That does nothing. Any idea why? If it turns out that this isn't something I can do on an axis, but it would work with a button, is there a way to treat my joysticks as buttons? Also, if anyone has suggestions for the other 5 buttons I'll have left after the mouse buttons are bound, I'm listening.

  • Install VirtualBox on Ubuntu 12.04.1 (on [Samsung] Chromebook)

    - by iphonedev7
    I have dual-booted Ubuntu Linux 12.04.1 LTS on my Samsung Series 5 Chromebook, and am trying to install and run Oracle VirtualBox (from the generic .run file downloaded from their website). However, every time I try to run it (as root from the command line), the following error occurs:

        Please install the build and header files for your current Linux kernel.
        The current kernel version is 3.4.0
        Problems were found which would prevent VirtualBox from installing.

    I have tried the version from the Software Center, as well as the command-line installation, both of which gave me errors based on my linux-headers/linux-kernel/linux-[kernel]-image. Here's an error I keep getting (on the command line):

        First Installation: checking all kernels...
        It is likely that 3.4.0 belongs to a chroot's host
        Building only for 3.5.0-18-generic
        Building initial module for 3.5.0-18-generic
        ERROR (dkms apport): kernel package linux-headers-3.5.0-18-generic is not supported
        Error! Bad return status for module build on kernel: 3.5.0-18-generic (x86_64)
        Consult /var/lib/dkms/virtualbox/4.1.12/build/make.log for more information.
        Setting up virtualbox-qt (4.1.12-dfsg-2ubuntu0.2) ...
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place

    ...and one of the more cryptic errors I get when trying to start any virtual machine:

        Result Code: NS_ERROR_FAILURE (0x80004005)
        Component: Machine
        Interface: IMachine {5eaa9319-62fc-4b0a-843c-0cb1940f8a91}

  • windows 95 virtualization and CPU idle

    - by brtlvrs
    We are in the situation that we need to migrate our Windows 95 systems (I know, that is so last century). There is no hardware available at a reasonable price to run our Windows 95 systems, so we are extending their overdue lifetime by making them virtual. Now we see that VMware reports 100% CPU utilization for a Windows 95 VM. This is because Windows 95 doesn't know how to idle a CPU. For this reason, software like Rain, Waterfall, or CPUCool was introduced: these tools send a HLT instruction to the CPU, which causes the CPU to halt and wait for new triggers to do work. I've tested the mentioned programs in the VM, but they don't work; they generate errors. Anyone have a valid workaround or solution?

    BTW, I know that the best solution is to replace Windows 95 with Windows XP, but in our situation that will take at least 5 years. Our Windows 95 systems are running factory process control software.

  • Heartbeat/DRBD failover didn't work as expected. How do I make the failover more robust?

    - by Quinn Murphy
    I had a scenario where a DRBD-heartbeat setup had a failed node but did not fail over. What happened was that the primary node had locked up, but didn't go down directly (it was inaccessible via ssh or the nfs mount, but it could be pinged). The desired behavior would have been to detect this and fail over to the secondary node, but it appears that since the primary didn't go fully down (there is a dedicated network connection from server to server), heartbeat's detection mechanism didn't pick up on that and therefore didn't fail over.

    Has anyone seen this? Is there something that I need to configure to have more robust cluster failover? DRBD seems to otherwise work fine (I had to resync when I rebooted the old primary), but without good failover, its use is limited.

    heartbeat 3.0.4, drbd84, RHEL 6.1. We are not using Pacemaker. nfs03 is the primary server in this setup, and nfs01 is the secondary.

    ha.cf:

        # Heartbeat logging
        logfacility daemon
        udpport 694
        ucast eth0 192.168.10.47
        ucast eth0 192.168.10.42
        # Cluster members
        node nfs01.openair.com
        node nfs03.openair.com
        # Heartbeat communication timing.
        # Sets the triggers and pulse time for swapping over.
        keepalive 1
        warntime 10
        deadtime 30
        initdead 120
        #fail back automatically
        auto_failback on

    And here is the haresources file:

        nfs03.openair.com IPaddr::192.168.10.50/255.255.255.0/eth0 drbddisk::data Filesystem::/dev/drbd0::/data::ext4 nfs nfslock

  • Why doesn't Mail.app properly thread Microsoft Outlook replies?

    - by thepurplepixel
    I use Mail.app 3.6 (on 10.5 Leopard), and I always use plain-text email. Normally, when I receive an email reply from practically anyone, Mail.app displays it with nicely nested quote levels (test message sent from Mail.app, replied to from Hotmail, replied to from Google Apps). Needless to say, I quite like this threading, and it keeps everything very visually organized. However, when I receive a plain-text email reply from people on Microsoft Outlook (tested with Outlook 2003 & 2007), it isn't threaded like that. Instead, it appears as below, without being threaded nicely:

        My reply to the message.

        -----Original Message-----
        From: Tyson Moore [mailto:[email protected]]
        Sent: [Date]
        To: [Receiver]
        Subject: Test

        Original message.

    Looking at the source of my original message, it appears that every time the message is quoted, greater-than signs (>) are inserted before every line in the reply. I am assuming this is what triggers the quoting behaviour exhibited by Mail.app, but I'm no expert.

    My question: is this a Mail.app limitation in not recognizing the -----Original Message----- line put in by Outlook, or an Outlook problem in not inserting > before every line in a reply (or both)?
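    For what it's worth, the repair that "QuoteFix"-style Outlook plugins perform is essentially mechanical: everything from the -----Original Message----- marker down gets a > prefix, which is what quote-aware clients like Mail.app key on. A toy C# sketch of that transformation (the marker string and sample text are illustrative only, not any plugin's actual code):

        using System;
        using System.Linq;

        class QuoteFixDemo
        {
            // Convert an Outlook-style reply into the ">"-quoted form
            // that most mail clients use to detect quote levels.
            static string FixQuoting(string body)
            {
                const string marker = "-----Original Message-----";
                int idx = body.IndexOf(marker, StringComparison.Ordinal);
                if (idx < 0) return body; // nothing to fix

                string head = body.Substring(0, idx);
                string quoted = string.Join(Environment.NewLine,
                    body.Substring(idx)
                        .Split(new[] { Environment.NewLine }, StringSplitOptions.None)
                        .Select(line => "> " + line)); // prefix every quoted line
                return head + quoted;
            }

            static void Main()
            {
                string reply = "My reply." + Environment.NewLine +
                               "-----Original Message-----" + Environment.NewLine +
                               "Original message.";
                Console.WriteLine(FixQuoting(reply));
            }
        }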

  • In Icinga (Nagios), how do I configure hosts with multiple IPs?

    - by gertvdijk
    I'm setting up Icinga (a Nagios fork) and I have some machines with multiple interfaces. Some services are only listening on one of them, and to check them correctly I'd like to know if it's possible to have multiple IP addresses configured for a single host in Icinga. Here's a minimal example:

    Remote server:

        eth0: 1.2.3.4 (public IP)
        eth1: 10.1.2.3 (private IP, secure tunnel)
        Apache listening on 1.2.3.4:80 (public only)
        OpenSSH listening on 10.1.2.3:22 (internal network only)
        Postfix SMTP listening on 0.0.0.0:25 (all interfaces)

    Icinga server:

        eth0: 10.2.3.4 (private IP, internet access)

    Now if I define a host:

        define host {
            use        generic-host
            host_name  server1
            alias      server1.gertvandijk.net
            address    10.1.2.3
        }

    this will not check the HTTP status correctly. And defining an additional host:

        define host {
            use        generic-host
            host_name  server1-public
            alias      server1.gertvandijk.net
            address    1.2.3.4
        }

    will check everything, but shows up as two independent hosts.

    Now I want to 'aggregate' these two hosts so they show up as a single host, while still providing an easy configuration to check the services on their proper address. What is the most elegant, configuration-line-saving solution to this? I read about several plugins available to work around this, but I can't figure out the current way to address it. Solutions go back to 2003, but I'm running Icinga 1.7.1, which is already capable of the address6 option, yet that triggers IPv6-only resolving on the hostname...

    Ideally, I wish to configure Icinga to be intelligent enough to know that the Postfix instance running on 10.1.2.3:25 is the same as 1.2.3.4:25 and thus not trigger two alarms. I guess this must have been tackled before and sysadmins have it set up by now. Please share your solution to this. Thanks! :)

  • pam_tally2 causing unwanted lockouts with SCOM or Nervecenter

    - by Chris
    We use pam_tally2 in our system-auth config file, which works fine for users. With services such as SCOM or NerveCenter it causes lockouts. Same behavior on RHEL5 and RHEL6.

    This is /etc/pam.d/nervecenter:

        #%PAM-1.0
        # Sample NerveCenter/RHEL6 PAM configuration
        # This PAM registration file avoids use of the deprecated pam_stack.so module.
        auth       include      system-auth
        account    required     pam_nologin.so
        account    include      system-auth

    and this is /etc/pam.d/system-auth:

        auth       sufficient   pam_centrifydc.so
        auth       requisite    pam_centrifydc.so deny
        account    sufficient   pam_centrifydc.so
        account    requisite    pam_centrifydc.so deny
        session    required     pam_centrifydc.so homedir
        password   sufficient   pam_centrifydc.so try_first_pass
        password   requisite    pam_centrifydc.so deny
        auth       required     pam_tally2.so deny=6 onerr=fail
        auth       required     pam_env.so
        auth       sufficient   pam_unix.so nullok try_first_pass
        auth       requisite    pam_succeed_if.so uid >= 500 quiet
        auth       required     pam_deny.so
        account    required     pam_unix.so
        account    sufficient   pam_succeed_if.so uid < 500 quiet
        account    required     pam_permit.so
        password   requisite    pam_cracklib.so try_first_pass retry=3 minclass=3 minlen=8 lcredit=1 ucredit=1 dcredit=1 ocredit=1 difok=1
        password   sufficient   pam_unix.so sha512 shadow try_first_pass use_authtok remember=8
        password   required     pam_deny.so
        session    optional     pam_keyinit.so revoke
        session    required     pam_limits.so
        session    [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
        session    required     pam_unix.so

    The login does work, but it also increments the pam_tally counter until it hits 6 "false" logins. Are there any PAM ninjas around who could spot the issue? Thanks.

  • What is the reason behind a Windows service stopping? Is it due to LAN problems or other issues?

    - by Steve
    I have a Windows service named Trunk which stopped one day, and I just want to know the reason behind it. This is an entry in the logs:

        Nov 15 17:54:04.318 :Trunk-1516:Trunk:handle_control_event:Received CTRL_LOGOFF_EVENT, ignore it
        Nov 25 15:54:52.157 :Trunk-1516:Trunk:ERROR - Process Restart Count (5) Exceeded for:C:\Program Files\secon\11.1.4\bin\vmd
        Nov 25 15:54:52.157 :Trunk-1516:Trunk:Stopping Trunk ...
        Nov 25 15:54:52.314 :Trunk-1516:Trunk:Shutting down, signaled C:\Program F
        Nov 20 15:54:20.345 :SCBridge.RegisterBridge:Exception in method:
        ScUtility.ScCommandException (0xa08990002): Exception from HRESULT: 0xa08990002
        Supplemental Information: None available.
          at ScServer.ScServiceProcessorRegistryManager.Attach(String serviceProcessor, ScClientInformation clientInfo, FORCE_ATTACH_SPEC forceAttachToMaster)
          at ScServer.ScServiceProcessorRegistry.Attach(String serviceProcessor, Object clientInfo)
          at ScServer.ScServiceProcessorRegistry.Attach(String serviceProcessor)
          at ServerControlInterface.SCBridge.RegisterBridge(String SPName)
        for system APOLLOSP0 attempting to attach and register with the Bridge

    The service is registered with a specific account, so I thought the user logging off from the machine might be the reason behind it, or perhaps a LAN disconnection problem. But having taken another look at the above entry, we seem to have a constant failure being generated in vmd, which causes Trunk to detect that vmd requires a restart. Most of the time it works OK and the restart count is anything up to 4. In this case the Trunk log confirms that the restart count is 5 and so is considered to be exceeded. Presumably this triggers the termination of the other services, and Trunk is actually doing its job. So, could this just be a timing issue where we need to increase the tolerance level (i.e. the restart count), or do we need to address the 0xa08990002 error in vmd?

  • EC2 Auto-Scaling with Spot and On-Demand Instances?

    - by platforms
    I'm looking to optimize the cost of our auto-scaling EC2 groups by having them launch spot instances instead of on-demand instances. What I really want is to be able to keep some servers in the group as on-demand instances, regardless of what happens in the spot-instance pricing market, and then have any additional servers in the group, above my configured minimum, be spot instances. I'm generally OK with the delay in adding servers via spot requests.

    I can't seem to find any way to do this, and I've tried to scour the AWS documentation. It appears that an ASG can be either on-demand or spot, but not a hybrid. I could possibly manually add an on-demand instance to the Elastic Load Balancer assigned to the auto-scaling group, but then the load of that server would not be factored into the auto-scaling measurements and triggers. I suppose I could enter a ridiculously high bid price in order to ensure that I always get the servers I need, but when I look at the pricing history I see occasional large spikes.

    The AWS documentation is at odds with itself, since in one place it says that if you enter a server minimum, that number is "ensured" to be there. But when you read about spot instances, there are no assurances. The price differential for spot is compelling, so I'd like to leverage that as much as I can while still maintaining an always-on baseline. Is this possible?

  • How do I get around this lambda expression outer variable issue?

    - by panamack
    I'm playing with PropertyDescriptor and ICustomTypeDescriptor (still), trying to bind a WPF DataGrid to an object for which the data is stored in a Dictionary. If you pass a WPF DataGrid a list of Dictionary objects, it will auto-generate columns based on the public properties of a dictionary (Comparer, Count, Keys and Values), so my Person class subclasses Dictionary and implements ICustomTypeDescriptor.

    ICustomTypeDescriptor defines a GetProperties method which returns a PropertyDescriptorCollection. PropertyDescriptor is abstract, so you have to subclass it; I figured I'd have a constructor that took Func and Action parameters that delegate the getting and setting of the values in the dictionary. I then create a PersonPropertyDescriptor for each Key in the dictionary like this:

        foreach (string s in this.Keys)
        {
            var descriptor = new PersonPropertyDescriptor(
                s,
                new Func<object>(() => { return this[s]; }),
                new Action<object>(o => { this[s] = o; }));
            propList.Add(descriptor);
        }

    The problem is that each property gets its own Func and Action, but they all share the outer variable s. So although the DataGrid auto-generates columns for "ID", "FirstName", "LastName", "Age" and "Gender", they all get and set against "Gender", which is the final resting value of s in the foreach loop.

    How can I ensure that each delegate uses the desired dictionary key, i.e. the value of s at the time the Func/Action is instantiated? Much obliged. Here's the rest of my idea; I'm just experimenting here, these are not 'real' classes...

        // DataGrid binds to a People instance
        public class People : List<Person>
        {
            public People()
            {
                this.Add(new Person());
            }
        }

        public class Person : Dictionary<string, object>, ICustomTypeDescriptor
        {
            private static PropertyDescriptorCollection descriptors;

            public Person()
            {
                this["ID"] = "201203";
                this["FirstName"] = "Bud";
                this["LastName"] = "Tree";
                this["Age"] = 99;
                this["Gender"] = "M";
            }

            //... other ICustomTypeDescriptor members...

            public PropertyDescriptorCollection GetProperties()
            {
                if (descriptors == null)
                {
                    var propList = new List<PropertyDescriptor>();
                    foreach (string s in this.Keys)
                    {
                        var descriptor = new PersonPropertyDescriptor(
                            s,
                            new Func<object>(() => { return this[s]; }),
                            new Action<object>(o => { this[s] = o; }));
                        propList.Add(descriptor);
                    }
                    descriptors = new PropertyDescriptorCollection(propList.ToArray());
                }
                return descriptors;
            }

            //... other ICustomTypeDescriptor members...
        }

        public class PersonPropertyDescriptor : PropertyDescriptor
        {
            private Func<object> getFunc;
            private Action<object> setAction;

            public PersonPropertyDescriptor(string name, Func<object> getFunc, Action<object> setAction)
                : base(name, null)
            {
                this.getFunc = getFunc;
                this.setAction = setAction;
            }

            // ... other PropertyDescriptor members...

            public override object GetValue(object component)
            {
                return getFunc();
            }

            public override void SetValue(object component, object value)
            {
                setAction(value);
            }
        }
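    A minimal sketch of the usual remedy for this classic foreach-capture pitfall (in C# 4 and earlier, foreach reuses a single loop variable, so every closure sees its final value): copy s into a fresh local declared inside the loop body, so each iteration's delegates capture their own variable.

        foreach (string s in this.Keys)
        {
            // Each iteration gets its own 'key'; the delegates below
            // capture it instead of the shared loop variable 's'.
            string key = s;
            var descriptor = new PersonPropertyDescriptor(
                key,
                new Func<object>(() => { return this[key]; }),
                new Action<object>(o => { this[key] = o; }));
            propList.Add(descriptor);
        }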

  • Cart coding help

    - by user228390
    I've written some simple JavaScript for a shopping basket, but now I've realised I made an error in how I built it and don't know how to fix it. What I have is one file that contains the JavaScript along with the image sources and the Add to cart buttons. What I'm trying to do now is split it into 2 files, one .html file and one .js file, but I can't get it to work, because when I make a button in the HTML file to call the function from the .js file, it won't work at all.

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
        <HTML>
        <HEAD>
        <TITLE>shopping</TITLE>
        <META http-equiv=Content-Type content="text/html; charset=UTF-8">
        <STYLE type=text/CSS>
        fieldset{width:300px}
        legend{font-size:24px;font-family:comic sans ms;color:#004455}
        </STYLE>
        <META content="MSHTML 6.00.2900.2963" name=GENERATOR>
        </HEAD>
        <BODY scroll="auto">
        <div id="products"></div><hr>
        <div id="inCart"></div>
        <SCRIPT type="text/javascript">
        var items=['Xbox~149.99','StuffedGizmo~19.98','GadgetyGoop~9.97'];
        var M='?';
        var product=[];
        var price=[];
        var stuff='';
        function wpf(product,price){
          var pf='<form><FIELDSET><LEGEND>'+product+'</LEGEND>';
          pf+='<img src="../images/'+product+'.jpg" alt="'+product+'" ><p>price '+M+''+price+'</p> <b>Qty</b><SELECT>';
          for(i=0;i<6;i++){pf+='<option value="'+i+'">'+i+'</option>'}
          pf+='</SELECT>';
          pf+='<input type="button" value="Add to cart" onclick="cart()" /></FIELDSET></form>';
          return pf
        }
        for(j=0;j<items.length;j++){
          product[j]=items[j].substring(0,items[j].indexOf('~'));
          price[j]=items[j].substring(items[j].indexOf('~')+1,items[j].length);
          stuff+=''+wpf(product[j],price[j])+'';
        }
        document.getElementById('products').innerHTML=stuff;
        function cart(){
          var order=[];
          var tot=0;
          for(o=0,k=0;o<document.forms.length;o++){
            if(document.forms[o].elements[1].value!=0){
              qnty=document.forms[o].elements[1].value;
              order[k]=''+product[o]+'_'+qnty+'*'+price[o]+'';
              tot+=qnty*price[o];
              k++
            }
          }
          document.getElementById('inCart').innerHTML=order.join('<br>')+'<h3>Total '+tot+'</h3>';
        }
        </SCRIPT>
        <input type="button" value="Add to cart" onclick="cart()" /></FIELDSET></form>
        </BODY>
        </HTML>

  • How can records be deleted without activating the delete trigger?

    - by Servaas Phlips
    Hello there. For about a month we have been experiencing records that disappear from our database without any reason. (Part of) our database structure is at http://i.imgur.com/i15nG.png

    Now, users and credentials can never be deleted. However, thanks to our backups we noticed that users have unfortunately disappeared from the database. The users and credentials that disappear appear to be completely random. In order to find out which application deletes these records, we created triggers with the following checks:

        CREATE TRIGGER Credential_SoftDelete ON [Credential]
        INSTEAD OF DELETE
        AS
        DECLARE @message nvarchar(255)
        DECLARE @hostName nvarchar(30)
        DECLARE @loginName nvarchar(30)
        DECLARE @deletedId nvarchar(30)
        SELECT @deletedId = credentialid FROM deleted;
        SELECT @hostName = host_name, @loginName = login_name
        FROM sys.dm_exec_sessions WHERE session_id = @@SPID;
        SELECT @message = '[FAULT] Credential : ' + USER_NAME() + ' deleted ' + @deletedId
            + ' on ' + @@SERVERNAME + ' from [' + @hostName + ' by ' + @loginName;
        EXEC xp_logevent 50001, @message, ERROR
        GO

    After we added this trigger, we hoped to find out which application deletes these credentials by searching the log files. Unfortunately, the credentials are still being deleted, and the trigger Credential_SoftDelete is never logged. I did try running a delete on the database where the trigger is installed and where the users have disappeared. I ran the following query:

        DELETE FROM [User] WHERE userid = 296

    and the trigger prevented deletion of this user and also logged it in the event log. This was on exactly the same database where the users disappeared (so no test copy or anything like that). Please note that we also use replication; the type of replication we use is merge replication.

    How is this possible? Can the fact that we use replication on this database be the cause of this problem?

  • Controlling TV Channel Through Computer

    - by killianmcc
    I'm passing my TV input through my computer so that I can use the video stream in an application I'm creating, which then outputs to my TV, e.g. Sky Digibox/Freeview box -> laptop -> TV, where on my laptop I'll be using the stream in a WPF application so I can overlay XAML objects onto it.

    My question is: what would be the best way for me to send a signal back to the box to, for example, change the channel? I don't want to have to use the remote; I want the computer to handle everything. Is there a standard cable these boxes have that could take what would normally be a remote-control signal and use that as input instead? Or would I have to go down the route of looking at some sort of infrared LED to send the signal, recreating the remote?

    Apologies if this is not clear enough; let me know and I'll try to be more precise.
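    One hedged illustration of the infrared route: dedicated USB/serial IR transmitters typically show up as a COM port and accept device-specific command bytes carrying the raw IR timings of a remote button. A minimal C# sketch along those lines; the port name, baud rate, and command bytes here are all hypothetical placeholders, since the actual values depend entirely on whichever transmitter hardware and captured IR code you end up with:

        using System;
        using System.IO.Ports;

        class IrBlaster
        {
            // Hypothetical settings - substitute whatever the real
            // IR transmitter's documentation specifies.
            const string PortName = "COM3";
            const int BaudRate = 9600;

            static void Main()
            {
                // Placeholder bytes standing in for a "channel up" IR code
                // captured beforehand from the real remote.
                byte[] channelUp = { 0x02, 0x4A, 0x10, 0xFF };

                using (var port = new SerialPort(PortName, BaudRate))
                {
                    port.Open();
                    // Ask the transmitter to replay the IR burst at the set-top box.
                    port.Write(channelUp, 0, channelUp.Length);
                }
            }
        }

    The WPF app could call something like this from a channel-change button handler, keeping all the control logic on the computer as described above.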

  • Routing connections through VPN based on hostname (not IP range)

    - by Michal M
    This bugs me immensely. I need to connect to a client's network through VPN, but I definitely do not want to send all my traffic through the client's network, so that option is out of the question. What I need, basically, is for the OS to know that all of the client's network subdomains (*.example.com) need to go through the VPN connection.

    I tried a couple of options:

    Changing the order of services and setting the VPN on top. But this works the same as "Send all traffic over VPN connection".

    Using the "VPN on Demand" option from the network advanced options. But this feature is quite rubbish, to be honest. It seems to work only in Safari (?!), and it doesn't route the connection but basically triggers the OS to connect to the selected VPN.

    The reason I need it to work based on hostnames rather than an IP range is simple: my client has a lot of servers inside his network and it's impossible for me to remember all the IPs. They are all within a range, but that doesn't help me remember them.

    Another option would be to put the VPN connection at the bottom of the network services, untick "Send all traffic...", and then put all known hostnames in the hosts file. But considering there could be hundreds of servers (and therefore hostnames and IPs), that's a ridiculous job, and if a new server appears on the network I'd need to edit the hosts file again. Sisyphean labours.

    However, this works on Windows very simply: if a hostname is not available through the default network interface, then it seems to try the VPN connection, and this works brilliantly. So, how can I achieve that on a Mac? I know the client's internal DNS addresses, if that is of any help (like directing certain domains through a different DNS).

    PS. Using the latest version, 10.6.6.
    PS2. I am using the VPN to access an intranet, version control servers (svn://), Samba shares, and for SSH access to servers.

  • WinRT WebView and Cookies

    - by javarg
    Turns out that the WebView control in WinRT is much more limited than its counterpart in WPF/Silverlight. There are some great articles out there on how to extend the control so it supports navigation events and some other features.

    For a personal project I'm working on, I needed to grab the cookies a web site generated for the user. Basically, after a user authenticated to a web site, I needed to get the authentication cookies and generate some extra requests on her behalf. In order to do so, I found this great article about a similar case using SharePoint and Azure ACS. The secret is to use a p/invoke to the native InternetGetCookieEx to get the cookies for the current URL displayed in the WebView control.

        void WebView_LoadCompleted(object sender, NavigationEventArgs e)
        {
            var urlPattern = "http://someserver.com/somefolder";
            if (e.Uri.ToString().StartsWith(urlPattern))
            {
                var cookies = InternetGetCookieEx(e.Uri.ToString());
                // Do something with the cookies
            }
        }

        static string InternetGetCookieEx(string url)
        {
            uint sizeInBytes = 0;
            // First call gets the required buffer capacity
            InternetGetCookieEx(url, null, null, ref sizeInBytes, INTERNET_COOKIE_HTTPONLY, IntPtr.Zero);
            uint bufferCapacityInChars = (uint)Encoding.Unicode.GetMaxCharCount((int)sizeInBytes);
            // Now get the cookie data
            var cookieData = new StringBuilder((int)bufferCapacityInChars);
            InternetGetCookieEx(url, null, cookieData, ref bufferCapacityInChars, INTERNET_COOKIE_HTTPONLY, IntPtr.Zero);
            return cookieData.ToString();
        }

    The function import using p/invoke follows:

        const int INTERNET_COOKIE_HTTPONLY = 0x00002000;

        [DllImport("wininet.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern bool InternetGetCookieEx(
            string pchURL,
            string pchCookieName,
            StringBuilder pchCookieData,
            ref System.UInt32 pcchCookieData,
            int dwFlags,
            IntPtr lpReserved);

    Enjoy!
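    As a follow-up, here is a hedged sketch of one way to replay the grabbed cookies when generating those extra requests on the user's behalf. It assumes the string returned by InternetGetCookieEx is a standard "name=value; name2=value2" cookie header, and the URL is a placeholder; disabling the handler's own cookie handling is what lets the manually supplied Cookie header through:

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        static class CookieReplay
        {
            // Replays a raw cookie header (as returned by InternetGetCookieEx)
            // on a new request made on the authenticated user's behalf.
            public static async Task<string> GetWithCookiesAsync(string url, string cookieHeader)
            {
                // UseCookies = false so the handler doesn't override our manual Cookie header.
                var handler = new HttpClientHandler { UseCookies = false };
                using (var client = new HttpClient(handler))
                {
                    var request = new HttpRequestMessage(HttpMethod.Get, url);
                    request.Headers.TryAddWithoutValidation("Cookie", cookieHeader);
                    var response = await client.SendAsync(request);
                    return await response.Content.ReadAsStringAsync();
                }
            }
        }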
