Search Results

Search found 11259 results on 451 pages for 'extended properties'.

  • How to Setup Sharepoint Extranet to authenticate against a dmz AD

    - by Satish
    I have a web app which is extended to an extranet for our clients to access. We have set up a separate AD server and domain for the DMZ, and clients have to be authenticated against that domain. I'm a little confused about the setup, especially which web.config files I have to update. Do I have to update the web.config for both the Central Admin site and the extended web app? According to this blog I need to update both, but as soon as I make the changes in the web.config for Central Admin, the Central Admin site stops working. Here is what I added to the Central Admin web.config file. Between /SharePoint and system.web I have this: <connectionStrings> <add name="DMZConnectionString" connectionString= "LDAP://dmz.xxx.com:389/OU=Clients,DC=dmz,DC=xxx,DC=com "/> Between system.web and securityPolicy: <membership defaultProvider=”DMZADProvider“> <providers> <add name="DMZADProvider" connectionStringName="DMZConnectionString" connectionUsername="DMZ\ldapUser" connectionPassword="Password" enableSearchMethods="true" attributeMapUsername="userPrincipalName" type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" /> </providers> </membership> I know the connectionUsername and password work because I use the same ones in SSP for importing profiles. Any idea what might be causing the error?
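
    One detail worth ruling out before anything else: the membership block above uses curly (typographic) quotes around the defaultProvider value, and .NET's config parser rejects those outright, which by itself would take Central Admin down. A cleaned-up version of that same block with straight quotes (the LDAP path and credentials are the asker's own placeholders):

    ```xml
    <membership defaultProvider="DMZADProvider">
      <providers>
        <add name="DMZADProvider"
             connectionStringName="DMZConnectionString"
             connectionUsername="DMZ\ldapUser"
             connectionPassword="Password"
             enableSearchMethods="true"
             attributeMapUsername="userPrincipalName"
             type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
      </providers>
    </membership>
    ```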

    Read the article

  • Can anyone explain these differences between two similar i7 processors? [closed]

    - by Brian Frost
    I have two systems I've just built. They both have i7 processors and Asus P8Z77 motherboards. When I run a simple processor loop benchmark that I wrote in Delphi some time back, I get one machine showing nearly twice as fast as the other. I then used CPU-Z to dump the details of the hardware and I see that the fast machine shows: Processor 1 ID = 0 Number of cores 4 (max 8) Number of threads 8 (max 16) Name Intel Core i7 2700K Codename Sandy Bridge Specification Intel(R) Core(TM) i7-2700K CPU @ 3.50GHz Package (platform ID) Socket 1155 LGA (0x1) CPUID 6.A.7 Extended CPUID 6.2A Core Stepping D2 Technology 32 nm TDP Limit 95 Watts Core Speed 3610.7 MHz Multiplier x FSB 36.0 x 100.3 MHz Stock frequency 3500 MHz The slow machine shows: Processor 1 ID = 0 Number of cores 4 (max 8) Number of threads 8 (max 16) Name Intel Core i7 2600K Codename Sandy Bridge Specification Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz Package (platform ID) Socket 1155 LGA (0x1) CPUID 6.A.7 Extended CPUID 6.2A Core Stepping D2 Technology 32 nm TDP Limit 95 Watts Core Speed 1648.2 MHz Multiplier x FSB 16.0 x 103.0 MHz Stock frequency 3400 MHz i.e. the slow machine has a 2600K and the fast machine a 2700K. The very different "Multiplier x FSB" must be significant, but I don't understand how two processors with a very 'similar' number can be so different. To make the machines perform the same, must I use matching processors, or is there some clever setting that I can change? Thanks for any help. Brian.

    Read the article

  • Cisco ASA 8.2 ACL For NAT

    - by javano
    Sadly I have gone back in time to ASA 8.2(5)33, which I am not so familiar with. I have configured NAT between two interfaces, but traffic isn't passing because I can't get the ACL to work. (The full config, which isn't very big, is here, but to keep this post tidy I have just pasted the important parts below.) interface Ethernet0/0 switchport access vlan 108 ! interface Ethernet0/6 switchport access vlan 104 ! interface Ethernet0/7 switchport access vlan 105 ! interface Vlan104 description BUILDING2 nameif BUILDING2 security-level 0 ip address 10.104.0.1 255.255.255.0 ! interface Vlan105 description BUILDING1 nameif BUILDING1 security-level 0 ip address 10.105.0.1 255.255.255.0 ! interface Vlan108 description Main LAN VLAN nameif lan security-level 0 ip address 172.22.0.215 255.255.255.0 ! object-group network obj_net_Remote_Hosts network-object host 111.111.111.3 network-object host 111.111.111.65 object-group network obj_host_pc1_eth1 network-object host 10.104.0.111 object-group network obj_host_pc2_eth1 network-object host 10.104.0.112 object-group network obj_host_pc3_eth1 network-object host 10.104.0.106 object-group network obj_host_pc4_eth1 network-object host 10.104.0.107 object-group network obj_net_PCs description IPs of PCs group-object obj_host_pc1_eth1 group-object obj_host_pc2_eth1 group-object obj_host_pc3_eth1 group-object obj_host_pc4_eth1 access-list acl_NAT_pc1_91 extended permit tcp host 10.104.0.111 host 111.111.111.3 eq 8101 access-list acl_Permit_PCs extended permit tcp object-group obj_net_PCs object-group obj_net_Remote_Hosts eq 8101 ! global (BUILDING1) 11 111.111.222.91 netmask 255.255.255.255 nat (BUILDING2) 11 access-list acl_NAT_pc1_91 access-group acl_Permit_PCs in interface BUILDING2 route BUILDING1 111.111.111.3 255.255.255.255 10.105.0.2 1 route BUILDING1 111.111.111.65 255.255.255.255 10.105.0.2 1 When I try to connect from PC1 to IP 111.111.111.3, I see the following error logged on the ASA console: %ASA-2-106001: Inbound TCP connection denied from 10.104.0.111/38495 to 111.111.111.3/8101 flags SYN on interface blades What the deuce!
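
    Note the tail of that log line: the SYN is arriving "on interface blades", an interface that doesn't appear anywhere in the pasted config, so the packet isn't entering where the NAT and ACL policy expect it. On 8.2 the quickest way to walk the NAT and ACL decisions is packet-tracer; a sketch (the source port is arbitrary):

    ```
    ciscoasa# packet-tracer input BUILDING2 tcp 10.104.0.111 38495 111.111.111.3 8101 detailed
    ```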

    Read the article

  • How can I recover XFS partitions from a formatted HD?

    - by giuprivite
    I deleted the partition table of my HD. I wanted to format another one, but by mistake, I formatted the wrong one. Then I also created some new partitions on it. Now I would like, if possible, to recover my old data. The old configuration was this: a primary NTFS partition with Windows, and an extended partition with four logical partitions: a swap and three XFS partitions (two for Ubuntu and OpenSuSE, and one with the home for both systems). This is the output I get when I run gpart in a terminal: ubuntu@ubuntu:~$ sudo gpart /dev/sdb Begin scan... Possible partition(Windows NT/W2K FS), size(39997mb), offset(0mb) Possible extended partition at offset(39997mb) Possible partition(Linux swap), size(8189mb), offset(39997mb) Possible partition(SGI XFS filesystem), size(40942mb), offset(48187mb) Possible partition(SGI XFS filesystem), size(40942mb), offset(89149mb) Possible partition(SGI XFS filesystem), size(175044mb), offset(130112mb) End scan. Checking partitions... Partition(OS/2 HPFS, NTFS, QNX or Advanced UNIX): primary Partition(Linux swap or Solaris/x86): logical Partition(Linux ext2 filesystem): logical Partition(Linux ext2 filesystem): orphaned logical Partition(Linux ext2 filesystem): orphaned logical Ok. Guessed primary partition table: Primary partition(1) type: 007(0x07)(OS/2 HPFS, NTFS, QNX or Advanced UNIX) size: 39997mb #s(81915360) s(63-81915422) chs: (0/1/1)-(1023/254/63)d (0/1/1)-(5098/254/51)r Primary partition(2) type: 015(0x0F)(Extended DOS, LBA) size: 265245mb #s(543221849) s(81915435-625137283) chs: (1023/254/63)-(1023/254/63)d (5099/0/1)-(38912/254/2)r Primary partition(3) type: 000(0x00)(unused) size: 0mb #s(0) s(0-0) chs: (0/0/0)-(0/0/0)d (0/0/0)-(0/0/0)r Primary partition(4) type: 000(0x00)(unused) size: 0mb #s(0) s(0-0) chs: (0/0/0)-(0/0/0)d (0/0/0)-(0/0/0)r Looking at the first eight lines, it seems the data are still there... but I don't know how to recover them. I have a free second HD of about 500 GB (the formatted one is 320 GB) that I can use for the recovery process.
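
    A cautious way forward, sketched under the assumption that the damaged disk is /dev/sdb and the 500 GB spare is mounted at /mnt/spare: image the disk before touching it, then let gpart write its guessed table back, so any mistake stays recoverable from the image.

    ```
    # raw image of the damaged disk first; keep going past read errors
    sudo dd if=/dev/sdb of=/mnt/spare/sdb.img bs=4M conv=noerror,sync

    # if the guess above looks right, let gpart write it back to the disk
    sudo gpart -W /dev/sdb /dev/sdb

    # then check the XFS logicals (first logical, sdb5, is the swap) read-only
    sudo xfs_repair -n /dev/sdb6
    ```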

    Read the article

  • How can I copy a VMware Fusion virtual machine to a FAT32 partition?

    - by Michael Prescott
    I created the virtual machine on a host running OS X. I then moved the machine to a FAT32 partition on an external drive. It moved the first time without error. Then I moved it from the external drive to a host running Ubuntu 9.10. I had to move to a FAT32 partition first because Ubuntu doesn't recognize Mac OS Extended partitions on the drive. So, the virtual machine (vm) ran on the Ubuntu host for a while and then I moved it back to the FAT32 partition and from there back to the OS X host. I worked on the vm for a while on the OS X host and then attempted to move it back to the FAT32 partition. I get the following system error: The Finder can’t complete the operation because some data in “my-virtual-machine” can’t be read or written. (Error code -36) Interestingly, I can move the file to another OS X partition, just not FAT32. I also perused VMware's forums and found advice to set permissions on all files and folders to 777. I did this, but have had no success. I notice the files within the vm package are 777 now, but there is an extended attributes symbol on their permission details "rwxrwxrwx@" Since I can copy the vm between OS X partitions, but not to non-OS X partitions, and all files and folders within the vm package and the package itself have permissions of 777, I speculate that the "@" is the problem. How can I remove the "@", or is there something else I need to modify to allow me to copy/move the vm to other hosts?
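
    Two usual suspects for Error -36 when copying to FAT32, offered as things to check rather than a definitive diagnosis: the extended attributes (the "@" in the listing), and FAT32's 4 GB per-file ceiling, which a virtual disk's .vmdk files easily exceed unless the disk was created as split 2 GB chunks. The attributes at least are easy to inspect and strip (the bundle name is a placeholder):

    ```
    # list extended attributes on everything inside the bundle
    xattr -lr my-virtual-machine.vmwarevm

    # strip them recursively, assuming nothing stored in them is needed
    xattr -cr my-virtual-machine.vmwarevm
    ```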

    Read the article

  • What is the difference between the Linux and Linux LVM partition type?

    - by ujjain
    Fdisk shows multiple partition types. What is the difference between choosing 83) Linux and 8e) Linux LVM? Choosing 83) Linux also works fine for using LVM, even creating a physical volume on /dev/sdb without a partition table works. Does picking a partition type in fdisk really matter? What is the difference in picking Linux or Linux LVM as partition type? [root@tst-01 ~]# fdisk /dev/sdb WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u'). Command (m for help): l 0 Empty 24 NEC DOS 81 Minix / old Lin bf Solaris 1 FAT12 39 Plan 9 82 Linux swap / So c1 DRDOS/sec (FAT- 2 XENIX root 3c PartitionMagic 83 Linux c4 DRDOS/sec (FAT- 3 XENIX usr 40 Venix 80286 84 OS/2 hidden C: c6 DRDOS/sec (FAT- 4 FAT16 <32M 41 PPC PReP Boot 85 Linux extended c7 Syrinx 5 Extended 42 SFS 86 NTFS volume set da Non-FS data 6 FAT16 4d QNX4.x 87 NTFS volume set db CP/M / CTOS / . 7 HPFS/NTFS 4e QNX4.x 2nd part 88 Linux plaintext de Dell Utility 8 AIX 4f QNX4.x 3rd part 8e Linux LVM df BootIt 9 AIX bootable 50 OnTrack DM 93 Amoeba e1 DOS access a OS/2 Boot Manag 51 OnTrack DM6 Aux 94 Amoeba BBT e3 DOS R/O b W95 FAT32 52 CP/M 9f BSD/OS e4 SpeedStor c W95 FAT32 (LBA) 53 OnTrack DM6 Aux a0 IBM Thinkpad hi eb BeOS fs e W95 FAT16 (LBA) 54 OnTrackDM6 a5 FreeBSD ee GPT f W95 Ext'd (LBA) 55 EZ-Drive a6 OpenBSD ef EFI (FAT-12/16/ 10 OPUS 56 Golden Bow a7 NeXTSTEP f0 Linux/PA-RISC b 11 Hidden FAT12 5c Priam Edisk a8 Darwin UFS f1 SpeedStor 12 Compaq diagnost 61 SpeedStor a9 NetBSD f4 SpeedStor 14 Hidden FAT16 <3 63 GNU HURD or Sys ab Darwin boot f2 DOS secondary 16 Hidden FAT16 64 Novell Netware af HFS / HFS+ fb VMware VMFS 17 Hidden HPFS/NTF 65 Novell Netware b7 BSDI fs fc VMware VMKCORE 18 AST SmartSleep 70 DiskSecure Mult b8 BSDI swap fd Linux raid auto 1b Hidden W95 FAT3 75 PC/IX bb Boot Wizard hid fe LANstep 1c Hidden W95 FAT3 80 Old Minix be Solaris boot ff BBT 1e Hidden W95 FAT1 Command (m for help):
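
    For what it's worth, the type byte is mostly a hint to operating systems and other tools; LVM records its own metadata when you run pvcreate and does not check the partition type, which is why type 83 (or no partition table at all) still works. A minimal sketch of the conventional route:

    ```
    # in fdisk: n (new partition), then t and code 8e to mark it Linux LVM
    sudo fdisk /dev/sdb

    # LVM reads its own on-disk labels, not the type byte
    sudo pvcreate /dev/sdb1
    sudo pvs
    ```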

    Read the article

  • SSL timeout on some sites, across all browsers, on Mac OS X Snow Leopard

    - by dansays
    For the past several weeks, I've been receiving "Error 7 (net::ERR_TIMED_OUT): The operation timed out" when I attempt to connect to either Twitter or Paypal via SSL. I get this specific error in Google Chrome, but the same problem occurs in both Safari and Firefox. Other sites work fine, and other computers on my network can access these two sites. I have no firewall settings that would prevent me from accessing these sites over port 443. I notice that both Twitter and Paypal both have "Verisign Class 3 Extended Validation SSL CA" certificates. It is unclear whether this is related to the problem. In an effort to troubleshoot, I attempted to open the test sites referenced on Verisign's root certificate support page, which worked fine. Just to be sure, I downloaded and installed the root package file and installed all included Verisign certificates. No joy. I feel like I've hit a dead end. Any ideas? Update the first: I also cannot connect to FedEx.com, who also has a Verisign Class 3 Extended Validation cert. Update the second: Aaaaaaand it fixed itself. I did nothing. Or, I did something that worked, but in a delayed fashion. Frustrating, but a win is a win. I'll take it.
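
    Since this resolved itself, one diagnostic worth keeping for the next occurrence is testing the handshake outside any browser, which separates a TLS or certificate problem from plain network trouble:

    ```
    # does TCP connect and the TLS handshake complete at all?
    openssl s_client -connect twitter.com:443 -servername twitter.com
    ```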

    Read the article

  • Problem routing between directly connected Subnets w/ ASA-5510

    - by Zephyr Pellerin
    This is an issue I've been struggling with for quite some time, with a seemingly simple answer (aren't all IT problems?). And that is the problem of passing traffic between two directly connected subnets with an ASA. While I'm aware that best practice is to have Internet - Firewall - Router, in many cases this isn't possible. For example, I have an ASA with two interfaces, named OutsideNetwork (10.19.200.3/24) and InternalNetwork (10.19.4.254/24). You'd expect Outside to be able to get to, say, 10.19.4.1, or at LEAST 10.19.4.254, but pinging the interface gives only bad news. Result of the command: "ping OutsideNetwork 10.19.4.254" Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 10.19.4.254, timeout is 2 seconds: ????? Success rate is 0 percent (0/5) Naturally, you'd assume that you could add a static route, to no avail. [ERROR] route Outsidenetwork 10.19.4.0 255.255.255.0 10.19.4.254 1 Cannot add route, connected route exists At this point, you might wonder if it's a NAT or access-list problem. access-list Outsidenetwork_access_in extended permit ip any any access-list Internalnetwork_access_in extended permit ip any any There is no dynamic NAT (or static NAT for that matter), and un-NATted traffic is permitted. When I try pinging the above address (10.19.4.254 from Outsidenetwork), I get this error message from level 0 logging (debugging). Routing failed to locate next hop for icmp from NP Identity Ifc:10.19.200.3/0 to Outsidenetwork:10.19.4.1/0 This led me to set same-security-traffic permit, and to try the same, lesser, and greater security levels on the two interfaces. Am I overlooking something obvious? Is there a command to set static routes that are classified higher than connected routes?
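
    One thing to keep in mind while testing: an ASA will not answer pings sent to its far-side interface address from a host on the other side, so "ping 10.19.4.254 from Outside" failing proves little by itself; host-to-host tests are more telling. For hosts on one directly connected subnet to reach hosts on the other when both interfaces sit at the same security level, the relevant knob is the inter-interface (not intra-interface) form of the command, plus ICMP inspection so echo replies are allowed back. A sketch:

    ```
    same-security-traffic permit inter-interface
    !
    policy-map global_policy
     class inspection_default
      inspect icmp
    ```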

    Read the article

  • How can I update generic non-pnp monitor?

    - by njk
    Background: I've been running a KVM switch with my monitor at 1920 x 1080 over VGA for over a year. I did a Windows Update on 12/11/12 which installed the following: Update for Windows 7 for x64-based Systems (KB2779562) Security Update for Windows 7 for x64-based Systems (KB2779030) Cumulative Security Update for Internet Explorer 8 for Windows 7 for x64-based Systems (KB2761465) Windows Malicious Software Removal Tool x64 - December 2012 (KB890830) Security Update for Windows 7 for x64-based Systems (KB2753842) Security Update for Windows 7 for x64-based Systems (KB2758857) Security Update for Windows 7 for x64-based Systems (KB2770660) After a restart, my extended monitor was dark. I attempted to reset the extended display configuration, and noticed my monitor was being detected as a Generic Non-PnP Monitor: I uninstalled, downloaded new, and re-installed display drivers. Nothing. I attempted to unplug my monitor from the power for 15 minutes. Nothing. I followed some of the suggestions on this thread, specifically DanM's, which suggested creating a new *.inf file and replacing that in Device Manager. Device Manager said the "best driver software for your device is already installed". The only thing that works is when the monitor is directly attached to the laptop. This obviously is not what I want. My thought is to somehow remove the Generic Non-PnP Monitor from the registry. How would I accomplish this, and would this help? Any other suggestions? Relevant Hardware: ASUS VE276 Monitor TRENDnet 2-Port USB KVM Switch (TK-207K) HP Laptop w/ ATI Radeon HD 4200 Screens
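
    On the "remove it from the registry" idea: a gentler route is to make Windows show devices that are no longer attached and uninstall the stale monitor entries from Device Manager itself. A sketch, from an elevated command prompt:

    ```
    :: make non-present devices visible in Device Manager
    set devmgr_show_nonpresent_devices=1
    start devmgmt.msc
    :: then View > Show hidden devices, expand Monitors,
    :: uninstall the stale Generic Non-PnP entries, and
    :: Action > Scan for hardware changes with the KVM attached
    ```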

    Read the article

  • Partitioning & Linux

    - by Zac
    Every tutorial on Linux-based partitioning schemes (or just partitioning in general) will tell you that a PC can have either 4 primary partitions, or 3 primaries and 1 extended. They will all also tell you that Linux (in my case, Ubuntu) can be installed on either. It's also come to my attention that it is not too atypical for FHS directories, such as usr/, tmp/, etc/, home/ or var/, to be mounted separately on other partitions. Several questions I am unable to find the answers to, purely for my own edification: (1) By "PC", are we really talking about common PC disk types, like IDE or SATA? I guess I'm wondering why PCs are limited to 4 primaries or 3 primaries + 1 extended. (2) I'm choking on some basic OS concepts: it is said that a partition can be mounted by a file system or an OS. So I assume this means I can somehow instruct Ubuntu to mount to 1 partition, and then any part of, say, ReiserFS, to be mounted to another partition? How? (3)(a) What about creating swap partitions? Is there too much of a good thing with swap partitioning? If I have 4GB RAM over a 320GB disk, what should my swap partition size be, and why? (3)(b) Are swap files the only way to create swap space? Wouldn't a Linux partitioning utility allow me to define a partition as being for virtual memory only? (4) Why are partitions limited to being "mounted" by just OSes and file systems? Why couldn't I write a program to take up its own, say, 512 MB partition, and then have it invoked or used by an OS installed on another partition? Thanks for shedding any light here... not critical that I know this stuff, but it's got me thinking incessantly. And when I think incessantly, I...can't......sleep....
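
    On (3)(b): swap partitions and swap files are two different things, and no special partitioning trick is required for the former; you mark the partition as type 82 by convention, then format and enable it with the regular tools. A sketch, assuming the new logical partition came up as /dev/sda5:

    ```
    # the fdisk type code (82) is conventional; mkswap does the real work
    sudo mkswap /dev/sda5
    sudo swapon /dev/sda5

    # make it permanent
    echo '/dev/sda5 none swap sw 0 0' | sudo tee -a /etc/fstab
    ```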

    Read the article

  • Adobe Reader XI doesn't allow the editing of fields in a document after it's been edited?

    - by leeand00
    One of the users at our company has a *.pdf file she received from the state of Pennsylvania. The version of Adobe Reader she is using is Adobe Reader XI 11.0.3. She uses this pdf file to send in a report. Her workflow goes like this: She makes a copy of the file. She opens the file and the file displays in purple at the top: Please fill out the following form. You can save data typed into this form. Highlight Existing Fields She fills in the specifics by entering values into the form fields. A few weeks later she returns to the same pdf document and can no longer edit the fields, instead she gets the following message: "This document enabled extended features in Adobe Reader. The document has been changed since it was created and use of extended features is no longer available. Please contact the author for the original version of this document." She's also running Windows 7 and I've been told that the issue was once fixed by setting compatibility mode on Adobe Reader XI to Windows XP SP3.

    Read the article

  • Can I clone my hard drive to an external and boot from the clone?

    - by willbuntu
    First thing: I am not asking what software I'm supposed to use. I already know the answer: Ghost (proprietary), Clonezilla, and dd (if I'm careful). What I really want to know is if it is possible to (essentially) bit-for-bit clone my entire installation (OS, installed software, activation(s), etc.) to an external USB hard-drive, and then boot off of that (if I need to, I know how to edit BIOS settings and use Plop boot manager), and work with it day-to-day as if there was virtually no difference from using my internal HDD now. Again, I'm not asking how to install Windows to an external (because I know I'd need to do some special workaround), I'm asking if I can clone everything and boot off of it. In case you're wondering why I'm going to this trouble: I'm using a Lenovo Essentials laptop that has an unmodifiable partition table (due to recovery crap), and has all 4 of its partitions spoken for (3 primary, one extended, cannot change the extended). Anyway, my thought is that if I can clone everything and boot off of it when I need to, and just have a Linux distro on the internal HDD, then that could work.
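
    Bit-for-bit cloning to an external disk is possible with any of the tools named above, with two caveats: the target must be at least as large as the source, and Windows is generally far fussier than Linux about booting from a USB-attached clone, so that part deserves a test run before anything is relied on. A raw-copy sketch with dd, assuming the internal disk is /dev/sda and the external is /dev/sdb (double-check the device names, since reversing them destroys the source):

    ```
    # whole-disk clone: partition table, boot sector, and all partitions
    sudo dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync
    sync
    ```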

    Read the article

  • cisco asa + action drop issue

    - by ghp
    I have created a tunnel between the 10.x.y.z network and 122.a.b.c. The tunnel is up and active, but when I try the packet tracer I get the ACTION as drop. I have also enabled same-security-traffic permit intra-interface. Can someone help me understand what this drop means? Result: input-interface: inside input-status: up input-line-status: up output-interface: outside output-status: up output-line-status: up Action: drop Drop-reason: (acl-drop) Flow is denied by configured rule Packet Tracer output @Shane Madden: please find below the packet tracer output. CASA5K-A# CASA5K-A# config t CASA5K-A(config)# packet-tracer input inside tcp 10.x.y.112 0 122.a.b.c 0 Phase: 1 Type: ROUTE-LOOKUP Subtype: input Result: ALLOW Config: Additional Information: in 0.0.0.0 0.0.0.0 outside Phase: 2 Type: ACCESS-LIST Subtype: Result: DROP Config: Implicit Rule Additional Information: Result: input-interface: inside input-status: up input-line-status: up output-interface: outside output-status: up output-line-status: up Action: drop Drop-reason: (acl-drop) Flow is denied by configured rule CASA5K-A(config)# ======================================================================== The access-groups are as follows: access-group acl-inbound in interface outside access-group acl-outbound in interface inside and the access-lists are access-list acl-inbound extended permit tcp any any gt 1023 access-list acl-outbound extended permit ip object-group net-Source object net-dest
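
    Two things worth re-checking before blaming the tunnel, offered as hypotheses: the trace was run with source and destination port 0, which is not a realistic flow and will never match a port-qualified rule; and the drop is reported at the ACCESS-LIST phase as an implicit rule, suggesting the addresses did not match acl-outbound's objects (is 10.x.y.112 actually in net-Source, and 122.a.b.c in net-dest?). Re-running the trace with plausible ports makes the result meaningful:

    ```
    CASA5K-A(config)# packet-tracer input inside tcp 10.x.y.112 40000 122.a.b.c 443 detailed
    ```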

    Read the article

  • Best format for hard drive for Windows and Mac?

    - by Neil
    I have a 500 GB USB External Hard Drive. I need four partitions on it, for the following purposes: 160 GB for a bootable backup of my Mac. 160 GB for a bootable backup of my Windows. 11 GB for a bootable Snow Leopard Install Disk. The rest for file storage. Now I need a partition table which will get recognised on both Windows and Mac, without needing extra software on Windows, which will let me keep bootable copies of both OSes, but let me access the file storage from both OSes. Currently, I have a GUID Partition Table (GPT), with Mac OS Extended (Journaled) partitions for the two backups, Mac OS Extended for the Install Disk, and NTFS for the file storage. While this gets recognised perfectly on my Mac, thanks to an NTFS for Mac driver from Paragon, when connected to Windows the drive is detected by the machine (listed in Safely Remove USB), but not recognised in Windows Explorer unless I install MacDrive, which is not feasible on public Windows machines where I might want to access my storage area. Can someone recommend the best combination of formats and software/drivers to get this done seamlessly?
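
    A hedged sketch of one way to lay this out from the Mac side in a single pass (the disk identifier and names are placeholders; verify with diskutil list first). diskutil cannot create NTFS, so the Windows-backup slice is created as FAT32 here and reformatted NTFS from Windows afterwards, while the shared slice stays FAT32 so stock Windows can read it with no drivers:

    ```
    diskutil partitionDisk disk2 GPT \
        JHFS+  MacBackup 160G \
        MS-DOS WINBACKUP 160G \
        JHFS+  SLInstall 11G \
        MS-DOS SHARED    R
    ```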

    Read the article

  • Dual Monitor + Virtual Desktop software (plus for cube)

    - by xenithorb
    I've recently purchased another monitor, my first one being a TV, which is much larger. I now sit at a desk and use my shiny new 24" LED more often, but I like to extend the desktop into the TV. The problem this presents is that, to save power and preserve the longevity of my 47" VIZIO, I try to keep it off when possible. What I'm seeking sounds very simple - if any of you have ever used Compiz or Deskspace (Yod'm), you'll know what I'm referring to when I talk about a "cube." The most important functionality I'm looking for is the ability to scroll desktop contents between both displays and virtual desktops. Deskspace does an excellent job of presenting an attractive cube, but it creates a separate cube and virtual desktop space for the second extended monitor (now the TV) - again, what I'm looking to do is scroll between virtual desktops, passing through both monitors. The net effect of this functionality would be to let me scroll the contents of the extended monitor to the first monitor should a window get caught there, without having to turn on the TV. So imagine the horizontal portion of a cube as being actual, real monitors - is there anything that allows one to rotate desktops between displays?

    Read the article

  • winforms databinding best practices

    - by Kaiser Soze
    Demands / problems: I would like to bind multiple properties of an entity to controls in a form. Some of them are read-only from time to time (according to business logic). When using an entity that implements INotifyPropertyChanged as the DataSource, every change notification refreshes all the controls bound to that data source (easy to verify - just bind two properties to two controls and invoke a change notification on one of them; you will see that both properties are hit and reevaluated). There should be user-friendly error notifications (the entity implements IDataErrorInfo), probably using ErrorProvider. Using the entity as the DataSource of the controls leads to performance issues and makes life harder when it's time for a control to be read-only. I thought of creating some kind of wrapper that holds the entity and a specific property, so that each control would be bound to a different DataSource. Moreover, that wrapper could hold the ReadOnly indicator for that property, so the control would be bound directly to that value. The wrapper could look like this: interface IPropertyWrapper : INotifyPropertyChanged, IDataErrorInfo { object Value { get; set; } bool IsReadOnly { get; } } But this also means a different ErrorProvider for each property (property wrapper). I feel like I'm trying to reinvent the wheel... What is the 'proper' way of handling complex binding demands like these? Thanks in advance.
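
    A minimal sketch of the wrapper idea as described, using TypeDescriptor for the property access and forwarding validation to the entity's IDataErrorInfo (everything beyond the asker's interface is invented for illustration):

    ```csharp
    using System.ComponentModel;

    public class PropertyWrapper : IPropertyWrapper
    {
        private readonly object _entity;
        private readonly PropertyDescriptor _property;
        private bool _isReadOnly;

        public PropertyWrapper(object entity, string propertyName)
        {
            _entity = entity;
            _property = TypeDescriptor.GetProperties(entity)[propertyName];
        }

        public object Value
        {
            get { return _property.GetValue(_entity); }
            set
            {
                _property.SetValue(_entity, value);
                OnPropertyChanged("Value"); // only this wrapper's control refreshes
            }
        }

        public bool IsReadOnly
        {
            get { return _isReadOnly; }
        }

        public void SetReadOnly(bool readOnly)
        {
            _isReadOnly = readOnly;
            OnPropertyChanged("IsReadOnly");
        }

        // IDataErrorInfo: delegate to the wrapped entity for this one property
        public string Error
        {
            get { var e = _entity as IDataErrorInfo; return e == null ? null : e.Error; }
        }

        public string this[string columnName]
        {
            get { var e = _entity as IDataErrorInfo; return e == null ? null : e[_property.Name]; }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        private void OnPropertyChanged(string name)
        {
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(name));
        }
    }
    ```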

    Read the article

  • JavaMail: Could not connect to SMTP server.

    - by javacode
    Hi. The following code causes an error. Please help me understand what's wrong. import javax.mail.*; import javax.mail.internet.*; import java.util.*; public class SendMail { public static void main(String [] args)throws MessagingException { SendMail sm=new SendMail(); sm.postMail(new String[]{"[email protected]"},"hi","hello","[email protected]"); } public void postMail( String recipients[ ], String subject, String message , String from) throws MessagingException { boolean debug = false; //Set the host smtp address Properties props = new Properties(); props.put("mail.smtp.host", "webmail.emailmyname.com"); // create some properties and get the default Session Session session = Session.getDefaultInstance(props, null); session.setDebug(debug); // create a message Message msg = new MimeMessage(session); // set the from and to address InternetAddress addressFrom = new InternetAddress(from); msg.setFrom(addressFrom); InternetAddress[] addressTo = new InternetAddress[recipients.length]; for (int i = 0; i < recipients.length; i++) { addressTo[i] = new InternetAddress(recipients[i]); } msg.setRecipients(Message.RecipientType.TO, addressTo); // Optional : You can also set your custom headers in the Email if you Want msg.addHeader("MyHeaderName", "myHeaderValue"); // Setting the Subject and Content Type msg.setSubject(subject); msg.setContent(message, "text/plain"); Transport.send(msg); } } Exception: Exception in thread "main" javax.mail.MessagingException: Could not connect to SMTP host: webmail.emailmyname.com, port: 25, response: 421 at com.sun.mail.smtp.SMTPTransport.openServer(SMTPTransport.java:1694) at com.sun.mail.smtp.SMTPTransport.protocolConnect(SMTPTransport.java:525) at javax.mail.Service.connect(Service.java:291) at javax.mail.Service.connect(Service.java:172) at javax.mail.Service.connect(Service.java:121) at javax.mail.Transport.send0(Transport.java:190) at javax.mail.Transport.send(Transport.java:120) at SendMail.postMail(SendMail.java:45) at SendMail.main(SendMail.java:9)
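
    The 421 in that exception is the server's own reply ("service not available", often meaning too many connections or greylisting), so the Java side mostly did its job; the host at webmail.emailmyname.com is refusing or throttling the connection. While investigating, a few standard JavaMail session properties help fail fast and handle authenticating servers (host and credentials below are placeholders):

    ```java
    Properties props = new Properties();
    props.put("mail.smtp.host", "webmail.emailmyname.com");
    props.put("mail.smtp.port", "25");
    props.put("mail.smtp.connectiontimeout", "10000"); // ms to open the socket
    props.put("mail.smtp.timeout", "10000");           // ms to await responses
    props.put("mail.smtp.auth", "true");

    // getInstance avoids the JVM-wide session that getDefaultInstance
    // caches with whatever properties it saw first
    Session session = Session.getInstance(props, new javax.mail.Authenticator() {
        protected PasswordAuthentication getPasswordAuthentication() {
            return new PasswordAuthentication("user", "password");
        }
    });
    ```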

    Read the article

  • applicationSettings and Web.config

    - by Eric J.
    I have a DLL that provides logging that I use for WebForms projects and now wish to use it in an ASP.Net MVC 2 project. Some aspects of that DLL are configured in app.config: <configuration> <configSections> <section name="Tools.Instrumentation.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" /> </sectionGroup> </configSections> <applicationSettings> <Tools.Instrumentation.Properties.Settings> <setting name="LogLevel" serializeAs="String"> <value>DEBUG</value> </setting> <setting name="AppName" serializeAs="String"> <value>MyApp</value> </setting> <setting name="Port" serializeAs="String"> <!--value>33333</value--> <value>0</value> </setting> </Tools.Instrumentation.Properties.Settings> </configuration> However, when I create a similar entry in Web.config, I get the error: Unrecognized configuration section applicationSettings My two-part question: How do I make this config entry work in Web.config? Where can I read up on the conceptual differences between WinForms configuration and ASP.Net configuration?
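
    Worth noting that the snippet above closes a sectionGroup that is never opened; the applicationSettings group declaration has to survive the move intact, because the runtime only recognises <applicationSettings> when <configSections> declares it. A hedged version of the declaration that the settings designer normally generates:

    ```xml
    <configSections>
      <sectionGroup name="applicationSettings"
                    type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
        <section name="Tools.Instrumentation.Properties.Settings"
                 type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
                 requirePermission="false" />
      </sectionGroup>
    </configSections>
    ```

    On the conceptual question: for applicationSettings there is essentially no difference; Web.config is the ASP.NET flavour of the same System.Configuration machinery, with extra ASP.NET-specific sections layered on top.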

    Read the article

  • ExtJS GridPanel Scrollbar does not appear in IE7 but it does in Firefox, etc

    - by Snowright
    Setup: I have an accordion layout containing a "properties" panel that nests two inner panels. The first inner panel holds an Ext.DataView, while the second panel is the Ext.grid.GridPanel in question. In the screenshot below, the white space containing the folder icon is the dataview, and below that is the gridpanel. Problem: In Firefox, Chrome, and Opera, there is a scrollbar that appears when my gridpanel has an overflow of properties. It is only in Internet Explorer that it does not appear. I am, however, able to scroll using my mouse scroll button in all browsers, including IE. I've also tried removing our custom css file in case it was affecting it somehow, but there was no change in doing so. I'm not sure exactly what code I should show, as I don't know where the exact problem is coming from, but here is the code for the mainpanel and gridpanel. var mainPanel = new Ext.Panel({ id : 'main-property-panel', title : 'Properties', height : 350, autoWidth : true, tbar : [comboPropertyActions], items : [panel1] //panel1 holds the DataView }); var propertiesGrid = new Ext.grid.GridPanel({ stripeRows : true, height : mainPanel.getSize().height-iconDataView.getSize().height-mainPanel.getFrameHeight(), autoWidth : true, store : propertiesStore, cm : propertiesColumnModel }) //Add gridpanel to mainPanel mainPanel.add(propertiesGrid); mainPanel.doLayout(); Any help in the right direction would be greatly appreciated. Thank you.
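
    One hedged avenue to try: older IE is notoriously picky about autoWidth on GridPanels, since it forces the grid to size itself from its content rather than its container. Giving the grid an explicit width (or letting a fit/anchor layout size it) often restores the native scrollbar:

    ```javascript
    var propertiesGrid = new Ext.grid.GridPanel({
        stripeRows: true,
        height: mainPanel.getSize().height - iconDataView.getSize().height
                - mainPanel.getFrameHeight(),
        width: mainPanel.getSize().width,   // explicit width instead of autoWidth
        store: propertiesStore,
        cm: propertiesColumnModel,
        viewConfig: { forceFit: true }      // columns share the available width
    });
    ```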

    Read the article

  • Multiple databases with Spring+Hibernate+JPA

    - by ziftech
    Hi everybody! I'm trying to configure Spring+Hibernate+JPA to work with two databases (MySQL and MSSQL). My datasource-context.xml: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:aop="http://www.springframework.org/schema/aop" xsi:schemaLocation="http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.5.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.5.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-2.5.xsd" xmlns:p="http://www.springframework.org/schema/p" xmlns:tx="http://www.springframework.org/schema/tx" xmlns:util="http://www.springframework.org/schema/util"> <!-- Data Source config --> <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close" p:driverClassName="${local.jdbc.driver}" p:url="${local.jdbc.url}" p:username="${local.jdbc.username}" p:password="${local.jdbc.password}"> </bean> <bean id="dataSourceRemote" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close" p:driverClassName="${remote.jdbc.driver}" p:url="${remote.jdbc.url}" p:username="${remote.jdbc.username}" p:password="${remote.jdbc.password}" /> <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager" p:entity-manager-factory-ref="entityManagerFactory" /> <!-- JPA config --> <tx:annotation-driven transaction-manager="transactionManager" /> <bean id="persistenceUnitManager" class="org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager"> <property name="persistenceXmlLocations"> <list value-type="java.lang.String"> <value>classpath*:config/persistence.local.xml</value> <value>classpath*:config/persistence.remote.xml</value> </list> </property> <property name="dataSources"> <map> <entry key="localDataSource" value-ref="dataSource" /> <entry key="remoteDataSource" value-ref="dataSourceRemote" /> </map> </property> <property name="defaultDataSource" ref="dataSource" /> </bean> <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" p:showSql="true" p:generateDdl="true"> </bean> </property> <property name="persistenceUnitManager" ref="persistenceUnitManager" /> <property name="persistenceUnitName" value="localjpa"/> </bean> <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor" /> </beans> Each persistence.xml contains one unit, like this: <persistence-unit name="remote" transaction-type="RESOURCE_LOCAL"> <properties> <property name="hibernate.ejb.naming_strategy" value="org.hibernate.cfg.DefaultNamingStrategy" /> <property name="hibernate.dialect" value="${remote.hibernate.dialect}" /> <property name="hibernate.hbm2ddl.auto" value="${remote.hibernate.hbm2ddl.auto}" /> </properties> </persistence-unit> PersistenceUnitManager causes the following exception: Cannot resolve reference to bean 'persistenceUnitManager' while setting bean property 'persistenceUnitManager'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'persistenceUnitManager' defined in class path resource [config/datasource-context.xml]: Initialization of bean failed; nested exception is org.springframework.beans.TypeMismatchException: Failed to convert property value of type [java.util.ArrayList] to required type [java.lang.String] for property 'persistenceXmlLocation'; nested exception is java.lang.IllegalArgumentException: Cannot convert value of type [java.util.ArrayList] to required type [java.lang.String] for property 'persistenceXmlLocation': no matching editors or conversion strategy found If I leave only one persistence.xml, without the list, everything works fine, but I need 2 units... I am also trying to find an alternative solution for working with two databases in a Spring+Hibernate context, so I would appreciate any solution. New error after changing to persistenceXmlLocations: No single default persistence unit defined in {classpath:config/persistence.local.xml, classpath:config/persistence.remote.xml} UPDATE: I added persistenceUnitName; it works, but only with one unit. I still need help. UPDATE: Thanks, ChssPly76. I changed the config files: datasource-context.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:aop="http://www.springframework.org/schema/aop" xsi:schemaLocation="http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.5.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.5.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-2.5.xsd" xmlns:p="http://www.springframework.org/schema/p" xmlns:tx="http://www.springframework.org/schema/tx" xmlns:util="http://www.springframework.org/schema/util"> <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close" p:driverClassName="${local.jdbc.driver}" p:url="${local.jdbc.url}" p:username="${local.jdbc.username}" p:password="${local.jdbc.password}"> </bean> <bean id="dataSourceRemote" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close" p:driverClassName="${remote.jdbc.driver}" p:url="${remote.jdbc.url}" p:username="${remote.jdbc.username}" p:password="${remote.jdbc.password}"> </bean> <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor"> <property name="defaultPersistenceUnitName" value="pu1" /> </bean> <bean id="persistenceUnitManager" class="org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager"> <property name="persistenceXmlLocation" value="${persistence.xml.location}" /> <property name="defaultDataSource" ref="dataSource" /> <!-- problem --> <property name="dataSources"> <map> <entry key="local" value-ref="dataSource" /> <entry key="remote" value-ref="dataSourceRemote" /> </map> </property> </bean> <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" p:showSql="true" p:generateDdl="true"> </bean> </property> <property name="persistenceUnitManager" ref="persistenceUnitManager" /> <property name="persistenceUnitName" value="pu1" /> <property name="dataSource" ref="dataSource" /> </bean> <bean id="entityManagerFactoryRemote" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" p:showSql="true" p:generateDdl="true"> </bean> </property> <property name="persistenceUnitManager" ref="persistenceUnitManager" /> <property name="persistenceUnitName" value="pu2" /> <property name="dataSource" ref="dataSourceRemote" /> </bean> <tx:annotation-driven /> <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager" p:entity-manager-factory-ref="entityManagerFactory" /> <bean id="transactionManagerRemote" class="org.springframework.orm.jpa.JpaTransactionManager" p:entity-manager-factory-ref="entityManagerFactoryRemote" /> </beans> persistence.xml <?xml version="1.0" encoding="UTF-8"?> <persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd" version="1.0"> <persistence-unit name="pu1" transaction-type="RESOURCE_LOCAL"> <properties> <property name="hibernate.ejb.naming_strategy" value="org.hibernate.cfg.DefaultNamingStrategy" /> <property name="hibernate.dialect" value="${local.hibernate.dialect}" /> <property name="hibernate.hbm2ddl.auto" value="${local.hibernate.hbm2ddl.auto}" /> </properties> </persistence-unit> <persistence-unit name="pu2" transaction-type="RESOURCE_LOCAL"> <properties> <property name="hibernate.ejb.naming_strategy" value="org.hibernate.cfg.DefaultNamingStrategy" /> <property name="hibernate.dialect" value="${remote.hibernate.dialect}" /> <property name="hibernate.hbm2ddl.auto" value="${remote.hibernate.hbm2ddl.auto}" /> </properties> </persistence-unit> </persistence> Now it builds two entityManagerFactory beans, but both are for Microsoft SQL Server: [main] INFO org.hibernate.ejb.Ejb3Configuration - Processing PersistenceUnitInfo [ name: pu1 ...] [main] INFO org.hibernate.cfg.SettingsFactory - RDBMS: Microsoft SQL Server [main] INFO org.hibernate.ejb.Ejb3Configuration - Processing PersistenceUnitInfo [ name: pu2 ...] [main] INFO org.hibernate.cfg.SettingsFactory - RDBMS: Microsoft SQL Server (but it must be MySQL). I suspect that only dataSource is used and dataSourceRemote is never substituted. That's my last problem.

    Read the article

  • NHibernate Guid with PK MySQL

    - by Andrew Kalashnikov
    Hello, colleagues. I've got a question. I use NHibernate with MySQL. In my entities I use Id (PK) for business logic and Guid (for replication). So my BaseDomain: public class BaseDomain { public virtual int Id { get; set; } public virtual Guid Guid { get; set; } public class Properties { public const string Id = "Id"; public const string Guid = "Guid"; } public BaseDomain() { } } My usage domain: public class ActivityCategory : BaseDomain { public ActivityCategory() { } public virtual string Name { get; set; } public new class Properties { public const string Id = "Id"; public const string Guid = "Guid"; public const string Name = "Name"; private Properties() { } } } Mapping: <class name="ActivityCategory, Clients.Core" table='Activity_category'> <id name="Id" unsaved-value="0" type="int"> <column name="Id" not-null="true"/> <generator class="native"/> </id> <property name="Guid"/> <property name="Name"/> </class> But when I insert my entity: [Test] public void Test() { ActivityCategory ac = new ActivityCategory(); ac.Name = "Test"; using (var repo = new Repository<ActivityCategory>()) repo.Save(ac); } I always get '00000000-0000-0000-0000-000000000000' in my Guid field. What should I do to generate the right Guid? Maybe something in the mapping? Thanks a lot!
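
    A hedged diagnosis: <property name="Guid"/> maps an ordinary column, so NHibernate writes whatever the object holds, and since nothing ever assigns the property, that is Guid.Empty. The simplest fix is assigning it at construction; NHibernate overwrites it with the stored value when loading existing rows:

    ```csharp
    public class BaseDomain
    {
        public virtual int Id { get; set; }
        public virtual Guid Guid { get; set; }

        public BaseDomain()
        {
            Guid = Guid.NewGuid(); // set once for newly created objects
        }
    }
    ```

    NHibernate's guid generators exist too, but they apply to id columns; for a plain property like this one, constructor assignment is the usual route.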

    Read the article

  • WPF Toolkit Charting and IndependentValueBinding, IndependentValuePath

    - by Joel Barsotti
    So I'm having a problem with the charting engine from the WPF toolkit. We haven't moved our data to a proper object model, so the ItemsSource is backed with a DataView. First attempt: <chartingToolkit:ScatterSeries x:Name="TargetSeries" DataPointStyle="{StaticResource TargetStyle}" ItemsSource="{Binding Path=TargetSeriesData}" IndependentValueBinding="{Binding Path=TargetSeries_X}" DependentValueBinding="{Binding Path=TargetSeries_X}" /> This crashes because, I believe, it thinks the bindings are the values to plot, or some sort of mismatch. Second attempt: <chartingToolkit:ScatterSeries x:Name="TargetSeries" DataPointStyle="{StaticResource TargetStyle}" ItemsSource="{Binding Path=TargetSeriesData}" IndependentValuePath="{Binding Path=TargetSeries_X}" DependentValuePath="{Binding Path=TargetSeries_X}" /> This crashes during the init step because the Path properties aren't backed with dependency properties and therefore cannot be bound. Third attempt: <chartingToolkit:ScatterSeries x:Name="TargetSeries" DataPointStyle="{StaticResource TargetStyle}" ItemsSource="{Binding Path=TargetSeriesData}" IndependentValuePath="targetFooXColumnName" DependentValuePath="targetFooYColumnName" /> Now this works! But I wanted to use the binding so I can switch from using targetFooXColumnName to targetFooBarXColumnName. So this solution will cause a whole lot of hacky-looking code to switch the Paths manually. Any way to fix this? Can I use some sort of converter to get the Binding properties to correctly pull the data from the columns in the DataView? Thanks, Joel
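
    Since the path properties are plain CLR properties, the "hacky" switching can at least be contained in one small code-behind helper; a sketch (the X column names are the asker's, the Y names are invented by analogy, and re-assigning ItemsSource nudges the series into re-reading the DataView):

    ```csharp
    private void UseBarColumns(bool useBar)
    {
        TargetSeries.IndependentValuePath =
            useBar ? "targetFooBarXColumnName" : "targetFooXColumnName";
        TargetSeries.DependentValuePath =
            useBar ? "targetFooBarYColumnName" : "targetFooYColumnName";

        // force the series to re-read its rows with the new paths
        var items = TargetSeries.ItemsSource;
        TargetSeries.ItemsSource = null;
        TargetSeries.ItemsSource = items;
    }
    ```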

    Read the article

  • NHibernate. Distinct parent child fetching

    - by Andrew Kalashnikov
    Hello. I've got a common NH mapping: <class name="Order, SummaryOrder.Core" table='order'> <id name="Id" unsaved-value="0" type="int"> <column name="id" not-null="true"/> <generator class="native"/> </id> <many-to-one name="Client" class="SummaryOrderClient, SummaryOrder.Core" column="summary_order_client_id" cascade="none"/> <many-to-one name="Provider" class="SummaryOrderClient, SummaryOrder.Core" column="summary_order_provider_id" cascade="none"/> <set name="Items" cascade="all"> <key column="order_id"/> <one-to-many class="OrderItem, Clients.Core" /> </set> </class> I want to get a list with this criteria: ICriteria criteria = NHibernateStateLessSession.CreateCriteria(typeof(SummaryOrder.Core.Domains.Order)); criteria.Add(Restrictions.Or (Restrictions.Eq(String.Format("{0}.Id", SummaryOrder.Core.Domains.Order.Properties.Client), idClient), Restrictions.Eq(String.Format("{0}.Id", SummaryOrder.Core.Domains.Order.Properties.Provider), idClient))). SetResultTransformer(new DistinctRootEntityResultTransformer()). SetFetchMode(SummaryOrder.Core.Domains.Order.Properties.Items, FetchMode.Join); return criteria.List<SummaryOrder.Core.Domains.Order>() as List<SummaryOrder.Core.Domains.Order> But I get duplicates. When I execute one restriction (without the OR) I get a distinct collection of orders, but the OR restriction breaks my query. I want to get a distinct collection of orders (even if the deduplication happens at the client). What's wrong? Please HELP!
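
    Two hedged suggestions. First, DistinctRootEntityResultTransformer collapses duplicates by object identity, and a stateless session has no first-level cache, so each joined row can hydrate a separate Order instance that the transformer cannot recognise as a duplicate; trying the same query on a regular ISession is a quick test. Second, the standard pattern for combining a collection fetch with restrictions without multiplying root rows is to push the restriction into an id subquery (entity name shortened here for readability):

    ```csharp
    using NHibernate.Criterion;
    using NHibernate.Transform;

    // restrict by ids first, then join-fetch Items for the matching orders
    DetachedCriteria orderIds = DetachedCriteria.For<Order>()
        .Add(Restrictions.Or(
            Restrictions.Eq("Client.Id", idClient),
            Restrictions.Eq("Provider.Id", idClient)))
        .SetProjection(Projections.Id());

    IList<Order> orders = session.CreateCriteria<Order>()
        .Add(Subqueries.PropertyIn("Id", orderIds))
        .SetFetchMode("Items", FetchMode.Join)
        .SetResultTransformer(new DistinctRootEntityResultTransformer())
        .List<Order>();
    ```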

    Read the article

  • AutoMapper How To Map Object A To Object B Differently Depending On Context

    - by IanT8
    Calling all AutoMapper gurus! I'd like to be able to map object A to object B differently depending on context at runtime. In particular, I'd like to ignore certain properties in one mapping case, and have all properties mapped in another case. What I'm experiencing is that Mapper.CreateMap can be called successfully in the different mapping cases; however, once CreateMap is called, the map for a particular pair of types is set and is not subsequently changed by succeeding CreateMap calls that might describe the mapping differently. I found a blog post which advocates Mapper.Reset() to get around the problem; however, the static nature of the Mapper class means that it is only a matter of time before a collision and crash occur. Is there a way to do this? What I think I need is to call Mapper.CreateMap once per appdomain and, later, be able to call Mapper.Map with hints about which properties should be included / excluded. Right now, I'm thinking about changing the source code by writing a non-static mapping class that holds the mapping config per instance. Poor performance, but thread-safe. What are my options? What can be done? AutoMapper seems so promising.
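
    For completeness, later AutoMapper releases grew exactly the instance-based model described above: a MapperConfiguration per mapping context, each producing its own IMapper, with no static state to Reset(). A sketch, assuming a version with the instance API (types A and B and the Secret member are stand-ins):

    ```csharp
    // one configuration per context; both can coexist in the same AppDomain
    var fullConfig = new MapperConfiguration(cfg => cfg.CreateMap<A, B>());

    var redactedConfig = new MapperConfiguration(cfg =>
        cfg.CreateMap<A, B>()
           .ForMember(d => d.Secret, opt => opt.Ignore()));

    IMapper fullMapper = fullConfig.CreateMapper();
    IMapper redactedMapper = redactedConfig.CreateMapper();

    B everything = fullMapper.Map<B>(source);   // all properties mapped
    B safe = redactedMapper.Map<B>(source);     // Secret ignored in this context
    ```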

    Read the article
