Search Results

Search found 19703 results on 789 pages for 'virtual ip'.

Page 237 of 789

  • Route outbound connections from local network through VPN

    - by Sharkos
    I have a server A running OpenVPN, an OpenVPN client B (a rooted Android phone as it happens) and a third party C (a laptop, tablet etc.) tethered to B. B can use the VPN to access the internet via A; C can use the tethered connection WITHOUT the VPN to access the internet via B. However, with the VPN on B active, I cannot load information from the internet on C. A appears to log similar traffic inbound and outbound when B or C attempt to load a webpage, say, but the VPN on device B reports no inbound traffic when the connection originated from C. Where should I look for packets being dropped, and what ip rules should I use to make sure they are passed back through the VPN and into the local network B <- C? (I'll obviously post whatever further information is needed.)

    Further info. Without VPN:

      root@android:/ # ip route
      default via [B's External Gateway] dev rmnet0
      [B's External Subnet] dev rmnet0 proto kernel scope link src [B's External IP]
      [B's External Gateway] dev rmnet0 scope link
      192.168.43.0/24 dev wlan0 proto kernel scope link src 192.168.43.1

    With VPN:

      root@android:/ # ip route
      0.0.0.0/1 dev tun0 scope link
      default via [B's External Gateway] dev rmnet0
      [B's External Subnet] dev rmnet0 proto kernel scope link src [B's External IP]
      [B's External Gateway] dev rmnet0 scope link
      [External address of A] dev tun0 scope link
      128.0.0.0/1 dev tun0 scope link
      172.16.0.0/24 dev tun0 scope link
      172.16.0.8/30 dev tun0 proto kernel scope link src 172.16.0.10
      192.168.43.0/24 dev wlan0 proto kernel scope link src 192.168.43.1
      192.168.168.0/24 dev tun0 scope link
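
    The symptom (no inbound traffic on tun0 for connections that originate on C) often points to tethered traffic not being forwarded and NATed out the VPN interface on B. A minimal sketch of the kind of rules that would cover that case, assuming tun0 is the VPN interface and wlan0 the tether interface on B (both names are assumptions, not taken from the post):

      # sketch only: tun0 = VPN, wlan0 = tether hotspot on B (assumed names)
      echo 1 > /proc/sys/net/ipv4/ip_forward
      iptables -t nat -A POSTROUTING -s 192.168.43.0/24 -o tun0 -j MASQUERADE
      iptables -A FORWARD -i wlan0 -o tun0 -s 192.168.43.0/24 -j ACCEPT
      iptables -A FORWARD -i tun0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT

    Watching the counters on these rules (iptables -L FORWARD -v) would also show where packets stop.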

    Read the article

  • Windows NLB + IIS - Stops serving pages

    - by Ye Ol Developer
    We are currently running Windows NLB and IIS7 load balanced across two servers. What happens is that randomly and sporadically the servers stop serving web pages. What we have noticed is that if we run the sites on a dedicated IP on either of the servers, these issues do not exist. As soon as we switch back to the load-balanced IP, everything goes awry. When the servers stop serving pages, we can still TS into the server and surf the sites internally without issues, or switch to the dedicated IP. However, the internal network cannot even access the files from the load-balanced IP. We are running out of ideas here. Has anyone had a similar problem?

    Read the article

  • Route all traffic of home network through VPN

    - by user436118
    I have a typical semi-advanced home network scenario:

      A cable modem (eth)
      A wireless router (Netgear N600), eth and wlan
      A home server (running Ubuntu 12.04 LTS, connected over wlan)
      A bunch of wireless clients (wlan)

    Lying around I have another, cheaper wlan router and two different USB wlan NICs that are known to work with Linux. ACTA struck. I want to route ALL of my WAN traffic through a remote server over a VPN. For the sake of completeness, let's say there is a remote server running Debian Squeeze where a VPN server is to be installed. The network should then behave so that if the VPN is not operative, it is separated from the outside world. I am familiar with general system/network practices, but lack the specific detailed knowledge to accomplish this. Please suggest the right approach, packages and configurations you'd use to reach said solution. I've also envisioned the following network configuration; please improve it if you see fit:

      ==LAN==
      Client:            ip: 10.1.1.x      nm: 255.0.0.0      gw: 10.1.1.1      (reached via WLAN)
      Wlan router 1:     ip: 10.1.1.1      nm: 255.0.0.0      gw: 10.10.10.1    (reached via ETH)
      Homeserver eth0:   ip: 10.10.10.1    nm: 0.0.0.0        gw: 192.168.0.1   (reached via WLAN)  <<< VPN is initiated here; the other endpoint is somewhere on the internet
      Homeserver wlan0:  ip: 192.168.0.2   nm: 255.255.255.0  gw: 192.168.0.1   (reached via WLAN)
      ==WAN==
      Wlan router 2:     ip: 192.168.0.1   nm: 0.0.0.0        gw: set via DHCP; uplink connector: cable modem
      Cable modem:       remote DHCP; has an on-board DHCP server for the ethernet device that connects to it, and only works this way

    All this WLAN fussery is because my home server is located in a part of the house where a cable link isn't possible, unfortunately.
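
    On the "separated from the outside world when the VPN is down" requirement, the usual shape of the policy on the home server is to only ever forward LAN traffic into the tunnel and NAT it there, never out the physical uplink. A hedged sketch, assuming the interface roles described above (eth0 = LAN side, wlan0 = uplink, tun0 = VPN; all assumed names):

      # sketch only: eth0 = LAN, wlan0 = uplink to router 2, tun0 = VPN (assumed names)
      iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT
      iptables -A FORWARD -i tun0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
      iptables -A FORWARD -i eth0 -o wlan0 -j DROP       # nothing may bypass the tunnel
      iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE

    With that in place, if the tunnel goes down the LAN simply loses internet access instead of leaking onto the WAN.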

    Read the article

  • Is there a postfix mysql virtual_maps append_at_origin workaround so I can pipe to external scripts?

    - by FilmJ
    I am using virtual domains, and I'd like to set up the server to alias to custom scripts. I manage all accounts using Postfix mappings to MySQL. It seems that Postfix automatically appends a virtual domain regardless of how the forwarded/aliased result comes back. So even though I have "|/bin/command", Postfix is reading it as "|/bin/command"@mydomain.com. Is there any workaround, or a setting I can fix? It would seem that append_at_myorigin=no would be ideal, but that's unsupported according to the documentation. Another option: maybe I can skip virtual aliases altogether and use the /etc/postfix/aliases table, assuming all emails go to the main domain. I'll try this, but if anyone has any other ideas how to make it work with virtual domains, please let me know, as this would be very useful! Thanks.
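
    For reference, virtual alias maps cannot deliver to a pipe at all; the usual pattern is to point the virtual address at a local alias and do the pipe in the local aliases(5) table. A hedged sketch (the address and alias name are placeholders, not from the original post):

      # /etc/postfix/virtual (or the equivalent mysql-backed virtual_alias_maps entry)
      script@mydomain.com    scriptalias

      # /etc/aliases (then run newaliases)
      scriptalias: "|/bin/command"

    Because the pipe now runs through local delivery, the appended-domain problem does not arise.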

    Read the article

  • remote desktop access

    - by pnp
    I have my work system on the IP range 172.16.xx.yy, and I have my personal system on the IP range 10.0.xx.yy. Both of them, however, are on the same network of my university, but on different LANs/VLANs (I hope I used the right word here). How can I remotely connect to my work system from my PC, given that both use private IP addresses? If such a thing is not possible with the current setup, what minimal changes are required for it?

    Read the article

  • Server 2012r2 VPN DNS

    - by Tyron Gower
    Have an issue where onsite clients cannot resolve VPN users, but VPN users can resolve onsite machines. Example:

      USER1 uses LAPTOP1
      USER1 connects to the VPN and gets the internal IP address 10.243.0.200
      USER1 pings SERVER1: it resolves to its IP and gets a reply
      USER1 RDPs into SERVER1 (inside the VPN)
      USER1 pings LAPTOP1 from SERVER1: it resolves to the IP address last assigned by DHCP (10.243.0.139) and the ping fails
      USER1 pings 10.243.0.200 from SERVER1: gets a reply

    Running Server 2012 R2. It is a domain controller, DNS and VPN server. The VPN is just configured with basic default settings. All VPN users have a static IP set up in AD. Not sure where to go from here.

    Read the article

  • Can't connect to my server from outside my wireless network. Server is OpenERP running on Ubuntu 12.04 desktop; router is a Cisco small business router

    - by user2613541
    I've looked on the internet regarding port forwarding. I've successfully forwarded port 8069 to my server's IP address. I can access OpenERP when I'm connected to my office's network, but not when I'm outside it. What am I missing? My computer's IP address starts with 192... Do I first have to use the router's IP address and then my server's IP address to get to my server from the outside? What should I type in my internet browser? I looked all day yesterday.
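
    From outside the office, the request has to target the router's public WAN address on the forwarded port; the 192.168.x.x address only exists inside the LAN. A hedged check from an external connection (the address below is a placeholder for the router's real public IP):

      # placeholder address: substitute the router's actual public WAN IP
      curl -I http://203.0.113.45:8069/

    If that times out from outside but works from inside, the forwarding rule (or an ISP block on inbound connections) is the next thing to look at.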

    Read the article

  • Website is not accessible from server which is using proxy

    - by Bhoot
    I hosted a website on a Windows 2008 R2 server which runs in a private domain. I set up bindings for ports 80 and 443 for HTTP and HTTPS respectively, and created inbound rules for ports 80 and 443 in Windows Firewall. After doing all this, I am still not able to access my website from a remote machine. Internet Explorer: "Internet Explorer cannot display the webpage". Chrome: "Oops! Google Chrome could not find xxxxxx". I tried accessing the website by IP address, but no luck. I tried to ping the server, but it says "TTL expired in transit". I then found some advice on the internet to check whether the server is using any kind of proxy in between. I found my IP address at www.getip.com, but ipconfig /all gives me a different IP address. Is it really a problem if we use a proxy? I am not sure I have concluded correctly, but is there any way to resolve this issue?

    Update: I figured it out. I have to call the website with the external IP address; because of the proxy settings, I was not able to reach it by the server's IP or machine name.

    Read the article

  • LAN full of public ipv4 addresses - How to filter it?

    - by sparc86
    The answer to my question maybe is not that hard, but anyway, I do not know what to do. I just got a new job at a university and I found out that the network (the LAN) is full of public IP addresses. Seriously, every host on the LAN (probably more than 150 hosts) has its own internet IP address, and I don't know how to manage it. I have very good experience using iptables (the Linux firewall) in a NATed environment. But how should I proceed in an environment where my whole LAN is working with a bunch of public IP addresses? Should I just use FORWARD rules and ignore the NAT rules, or is there any other issue in such an environment that I should take care of? Can I add a firewall between the router and the LAN in order to do packet filtering for these public IP addresses in my LAN, or will this just not work? Thanks!
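
    In that situation the firewall becomes a plain router (or bridge) with filtering: the FORWARD chain does all the work and no NAT is needed. A hedged sketch of the default-deny shape such a box often takes, assuming eth0 faces the upstream router and eth1 faces the LAN (interface names and the SSH exception are assumptions):

      # sketch only: eth0 = upstream/router side, eth1 = LAN side (assumed names)
      iptables -P FORWARD DROP
      iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
      iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT                      # LAN out to the internet
      iptables -A FORWARD -i eth0 -o eth1 -p tcp --dport 22 -j ACCEPT    # example inbound exception

    The public addresses keep working end to end; the box only decides which packets it is willing to forward.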

    Read the article

  • Azure Linux out of band management

    - by faker
    I have a Linux (Ubuntu) virtual machine running on Azure. It seems like the only way to connect to it is via SSH. This is OK for normal operation, but what to do when something goes wrong (fsck waiting for user-input, new kernel doesn't boot, mis-configured network, etc.)? There is a grayed out "Connect" button in the management interface, and the help for it says: To access a virtual machine running Windows Server, click Connect and follow the instructions. Enter the password that was set when the virtual machine was created. The Connect button is not available for a virtual machine running Linux, but you can use your favorite SSH program to access it. I've read the documentation on the command line tools, but there is also no way to connect to it. Is there any way for me to get such a console?

    Read the article

  • Destination host unreachable

    - by user1010101
    I have 2 routers connected to 1 switch which contains two VLANs; one router has the IP table of VLAN 1 and the other router has the IP table of VLAN 2. I have trunked both router cables to the switch. I have set up one IP table per router, corresponding to the IP addresses of the PCs that have that router's address as their gateway. When I ping from 192.168.100.2 to 192.168.200.2, it tells me that the destination host is unreachable, and the message comes from the router 192.168.100.1. So I guess the router for 192.168.100.x does not see the router for 192.168.200.x, right? Or am I wrong? What are good troubleshooting steps? http://i.imgur.com/b94Ir.png is a representation of the network; I cannot post the image inline since I don't have enough reputation.
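
    A hedged first pass at narrowing this down, starting on the 192.168.100.1 router (the commands below assume a Linux-style router; on Cisco gear the equivalents would be show ip route, ping and traceroute):

      ip route                      # is there any route to 192.168.200.0/24 at all?
      ping 192.168.200.1            # can this router reach the other router directly?
      traceroute 192.168.200.2      # where along the path do replies stop?

    "Destination host unreachable" coming from 192.168.100.1 usually means that router has no route (and no default route) towards 192.168.200.0/24, so checking its routing table is the first step.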

    Read the article

  • A really unique case of Load Balancing

    - by Shamshun
    I have an internet connection which has limited up/down bandwidth per IP address. What I want to do is get multiple IP addresses on a single LAN interface and use a load balancer to distribute traffic through them. I was successful at getting 100 IP addresses on a single interface. My problem is that Linux and Windows use the first IP address of an interface by default, so my bandwidth is not increased. I would appreciate it if someone could tell me what load-balancing software has the ability to distribute load between multiple IPs on a single interface. I have tried to do so on both Windows 7 and BackTrack Linux R2.
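
    On the Linux side this can be illustrated without separate load-balancing software, by source-NATing new outbound connections across the extra addresses. A hedged sketch using three placeholder addresses bound to eth0 (the addresses and the even three-way split are assumptions):

      # sketch only: 203.0.113.11-13 stand in for addresses already bound to eth0
      iptables -t nat -A POSTROUTING -o eth0 -m statistic --mode random --probability 0.33 -j SNAT --to-source 203.0.113.11
      iptables -t nat -A POSTROUTING -o eth0 -m statistic --mode random --probability 0.50 -j SNAT --to-source 203.0.113.12
      iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source 203.0.113.13

    Only the first packet of each connection hits the nat table, so each connection sticks to one source address while new connections spread across all three.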

    Read the article

  • Iptables and counters

    - by mehturt
    I'm trying to use iptables counters with Munin to monitor the traffic of hosts on my local subnet. For each host I set up a rule like this:

      iptables -I OUTPUT -d $ip

    This should count the packets going from the firewall to $ip, correct? I found out that this does not seem to count all packets. I start tcpdump on my router (Linux) and I see packets to $ip that are not counted. For example, I check the number of packets for the rule for my phone's IP: I start tcpdump, refresh Gmail on my phone, and I see packets in tcpdump's output, but the iptables rule counters are not incremented. Then I open a web page on the same phone and the counters are incremented. What could be the reason?
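
    A likely explanation: the OUTPUT chain only sees packets the router itself originates (its own connections, DNS answers it serves, and so on), while packets it routes on behalf of LAN hosts traverse the FORWARD chain instead. A hedged sketch of counting rules that cover forwarded traffic for a host:

      iptables -I FORWARD -d $ip    # traffic forwarded towards the host
      iptables -I FORWARD -s $ip    # traffic forwarded from the host
      iptables -L FORWARD -v -n -x  # read the packet/byte counters

    That would also explain why some activity still bumps the OUTPUT counters while most of the host's traffic does not.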

    Read the article

  • Hyper-V Manager - Host Access During a Catastrophe

    - by LonnieBest
    How can I ensure that I always have Hyper-V Manager access to a Hyper-V server, even in the event that the Active Directory server is down (in a domain-login environment)? Background: the one that came before me set up the company's servers as virtual machines on top of a host running Hyper-V Server 6.1 (7601) Service Pack 1. For managing Hyper-V, he installed Windows 7 onto a virtual machine (run on the same host) with Hyper-V Manager installed. When the (virtual) Active Directory server (run on this same host) is rebooted, I'm unable to RDP into the Windows 7 virtual machine during that reboot, and I'm therefore unable to access Hyper-V Manager while the Active Directory server is down. I suspect I can't log in because I can't authenticate against the Active Directory server. I'm going to install Hyper-V Manager onto some additional managers' workstations, but how can I ensure they'll have access in a catastrophe where Active Directory authentication isn't possible?

    Read the article

  • How to set up a VPN tunnel (IPsec) connection

    - by Alfwed
    I'm working with a client who wants to set up a VPN tunnel between their network and ours. They're in charge of the tunnel, and to give us access they are asking me for my public IP and my LAN IP. This is what I get when I run ifconfig on the server I will use to connect to the VPN:

      $ ifconfig
      eth0      Link encap:Ethernet  HWaddr d4:ae:52:cd:xx:xx
                inet adr:62.210.xxx.xxx  Bcast:62.210.xxx.xxx  Masque:255.255.255.0
                adr inet6: fe80::d6ae:52ff:xxxx:xx/64 Scope:Lien
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                Packets reçus:55255032 erreurs:0 :779628 overruns:0 frame:0
                TX packets:5419527 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 lg file transmission:1000
                Octets reçus:5598164393 (5.5 GB)  Octets transmis:1034297288 (1.0 GB)
                Interruption:16 Mémoire:c0000000-c0012800

      lo        Link encap:Boucle locale
                inet adr:127.0.0.1  Masque:255.0.0.0
                adr inet6: ::1/128 Scope:Hôte
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                Packets reçus:45923382 erreurs:0 :0 overruns:0 frame:0
                TX packets:45923382 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 lg file transmission:0

    The inet adr 62.210.xxx.xxx is my public IP, but it seems I don't have any LAN IP. Can the connection work without a LAN IP, or should I create a private network somehow?

    Read the article

  • Site down, SSH and FTP work fine

    - by Euskadi
    I cannot access my site through my web browser, using either the domain name or the IP address (which also pings fine). The site was working last night when I went to bed, so I can't think what might have changed. I have tried restarting Apache to no avail. Update: I've noticed that http://[IP Address]/phpmyadmin works but [domain name]/phpmyadmin does not, and the same happens for Webmin (https://[IP]:1000). Could this be a problem with BIND?
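
    Given that the IP works but the name does not, a hedged first step is to confirm what the name resolves to right now and whether the BIND server still answers for it (the domain and nameserver below are placeholders):

      dig +short example.com                 # what does your resolver return for the site?
      dig @ns1.example.com example.com       # ask the authoritative server (the BIND box) directly

    If the two answers disagree, or the authoritative query times out, the problem is DNS rather than Apache.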

    Read the article

  • 2 servers, high availability and faster response

    - by user17886
    I recently bought a second webserver because I worry about hardware failure of my old server. Now that I have the second server, I wish to do a little more than just have one server on standby replicating all day. As long as it's there, I might as well get some advantage out of it! I have a website powered by Ubuntu 12.04, nginx, php-fpm, APC, MySQL (5.5) and CouchDB. I'm currently testing configurations where I can achieve failover AND make good use of the extra hardware for faster responses / distributed load. The setup I am testing now involves heartbeat for IP failover and two identical servers. Of the two servers, only one has a public IP address; if one server crashes, the other takes over the public IP address. On an incoming request, nginx forwards the request to php-fpm on either server A or server B (50/50 if both servers are alive). Once the request has been sent to php-fpm, both servers look at localhost for the MySQL server; I use master-master MySQL replication for this. The file system is synced with lsyncd. This works pretty well, but I'm reading it's discouraged by the (MySQL) community. Another option I could think of is to use one server as a MySQL master and one server as a web/PHP server. The servers would still sync their file systems and still run the same duplicate software (nginx, MySQL), but master-slave MySQL replication could be used. As long as both servers are alive, I could simply prefer nginx to listen on IP A and MySQL on IP B. If one server is down, the other server could take over the tasks of the failed server, simply by IP switching. But I'm completely new at this, so I would greatly value your expert advice. Is either of the two setups any good? If you have any thoughts on this, please let me know! PS: virtualisation, hosting in different locations, or active/passive setups are not solutions I'm looking for. I find virtual servers either too slow or too expensive. I already have a passive failover at another location, but in case of a crash I found the site was still unreachable for too long due to DNS caching.
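
    For illustration of the floating-IP piece only (not a recommendation between the two layouts): the shared public address that "moves" on failover is often expressed as a small VRRP definition. Keepalived is shown here purely as an example, heartbeat achieves the same with its own resource configuration, and the address and interface are placeholders:

      vrrp_instance VI_1 {
          state MASTER              # the peer runs BACKUP with a lower priority
          interface eth0
          virtual_router_id 51
          priority 100
          advert_int 1
          virtual_ipaddress {
              203.0.113.10          # the shared public address that fails over
          }
      }

    Whichever layout is chosen, this floating address is what clients keep hitting, so no DNS change (and no DNS caching delay) is involved when a node dies.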

    Read the article

  • 64bit VM on Windows 7 x64

    - by argtag
    Is there any way to run a 64-bit guest VM on Windows 7 x64? No luck with Virtual PC 2007 or Virtual Server 2005 R2 (both blocked during install). I know the Windows Virtual PC app that comes with Windows 7 doesn't allow 64-bit guests. Thanks.

    Read the article

  • VMWare Player does not save changes in guest operating system

    - by Giorgos Manoltzas
    I have installed VMWare Player on my Windows 7 Ultimate machine and added Ubuntu 12.10 as a virtual machine. The problem is that every time I power off the virtual machine, everything is lost: every program I install, folder I save, or anything else disappears completely, and when I start the virtual machine again it is as if it were newly installed. Is there an option to change this (strange) behaviour?

    Read the article

  • Ubuntu: Resize the root LVM(2?) partition

    - by user12259
    I have an Ubuntu virtual machine running in VirtualBox 2.2.4, and I created it on an 8 GB virtual disk which is too small, so I am trying to increase the size of the disk. So far, I have done this:

      Created a new, larger virtual disk
      Added the 2nd disk to the machine
      Used CloneZilla to clone the first disk onto the 2nd disk
      Removed the first disk
      Booted up off the 2nd (larger) disk

    But now I'm still stuck with an 8 GB partition on my new 100 GB virtual disk. What's the easiest path from here to having a 100 GB partition? :) I gather GParted can resize partitions, but it doesn't seem to support LVM2 partitions, which mine seems to be. thx Alex
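
    Because the root is on LVM, the usual sequence is: grow the partition that holds the physical volume, then grow the PV, the logical volume and finally the filesystem. A hedged sketch, assuming an ext filesystem and the common Ubuntu device names /dev/sda5 and /dev/mapper/ubuntu-root (both names are assumptions):

      # sketch only: grow the PV partition first with fdisk/parted, then:
      pvresize /dev/sda5                                # let LVM see the larger partition
      lvextend -l +100%FREE /dev/mapper/ubuntu-root     # grow the root logical volume
      resize2fs /dev/mapper/ubuntu-root                 # grow the ext3/ext4 filesystem

    vgdisplay and lvdisplay are useful for confirming the real volume group and LV names before running anything.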

    Read the article

  • fluent nhibernate one to many mapping

    - by Sammy
    I am trying to figure out what I thought was just a simple one-to-many mapping using Fluent NHibernate, and I'm hoping someone can point me in the right direction. I have an Articles table and a Categories table; many Articles can belong to only one Category. My Categories table has 4 categories and Articles has one article, associated with category1. Here is my setup:

      using FluentNHibernate.Mapping;
      using System.Collections;
      using System.Collections.Generic;

      namespace FluentMapping
      {
          public class Article
          {
              public virtual int Id { get; private set; }
              public virtual string Title { get; set; }
              public virtual Category Category { get; set; }
          }

          public class Category
          {
              public virtual int Id { get; private set; }
              public virtual string Description { get; set; }
              public virtual IList<Article> Articles { get; set; }

              public Category()
              {
                  Articles = new List<Article>();
              }

              public virtual void AddArticle(Article article)
              {
                  article.Category = this;
                  Articles.Add(article);
              }

              public virtual void RemoveArticle(Article article)
              {
                  Articles.Remove(article);
              }
          }

          public class ArticleMap : ClassMap<Article>
          {
              public ArticleMap()
              {
                  Table("Articles");
                  Id(x => x.Id).GeneratedBy.Identity();
                  Map(x => x.Title);
                  References(x => x.Category).Column("CategoryId").LazyLoad();
              }

              public class CategoryMap : ClassMap<Category>
              {
                  public CategoryMap()
                  {
                      Table("Categories");
                      Id(x => x.Id).GeneratedBy.Identity();
                      Map(x => x.Description);
                      HasMany(x => x.Articles).KeyColumn("CategoryId").Fetch.Join();
                  }
              }
          }
      }

    If I run this test:

      [Fact]
      public void Can_Get_Categories()
      {
          using (var session = SessionManager.Instance.Current)
          {
              using (var transaction = session.BeginTransaction())
              {
                  var categories = session.CreateCriteria(typeof(Category))
                      //.CreateCriteria("Articles").Add(NHibernate.Criterion.Restrictions.EqProperty("Category", "Id"))
                      .AddOrder(Order.Asc("Description"))
                      .List<Category>();
              }
          }
      }

    I am getting 7 categories, due to the left outer join used by NHibernate. Any idea what I am doing wrong here? Thanks.

    [Solution] After a couple of hours reading the NHibernate docs, here is what I came up with:

      var criteria = session.CreateCriteria(typeof(Category));
      criteria.AddOrder(Order.Asc("Description"));
      criteria.SetResultTransformer(new DistinctRootEntityResultTransformer());
      var cats1 = criteria.List<Category>();

    Using the NHibernate LINQ provider:

      var linq = session.Linq<Category>();
      linq.QueryOptions.RegisterCustomAction(c => c.SetResultTransformer(new DistinctRootEntityResultTransformer()));
      var cats2 = linq.ToList();

    Read the article

  • Fluent NHibernate and Polymorphism and a Newbie!

    - by Andy Baker
    I'm a Fluent NHibernate newbie and I'm struggling to map a hierarchy of polymorphic objects. I've produced the following model, which recreates the essence of what I'm doing in my real application. I have a ProductList and several specialised types of product:

      public class MyProductList
      {
          public virtual int Id { get; set; }
          public virtual string Name { get; set; }
          public virtual IList<Product> Products { get; set; }

          public MyProductList()
          {
              Products = new List<Product>();
          }
      }

      public class Product
      {
          public virtual int Id { get; set; }
          public virtual string ProductDescription { get; set; }
      }

      public class SizedProduct : Product
      {
          public virtual decimal Size { get; set; }
      }

      public class BundleProduct : Product
      {
          public virtual Product BundleItem1 { get; set; }
          public virtual Product BundleItem2 { get; set; }
      }

    Note that I have a specialised type of Product called BundleProduct that has two products attached. I can add any of the specialised types of product to MyProductList, and a bundle product can be made up of any of the specialised types of product too. Here is the Fluent NHibernate mapping that I'm using:

      public class MyListMap : ClassMap<MyList>
      {
          public MyListMap()
          {
              Id(ml => ml.Id);
              Map(ml => ml.Name);
              HasManyToMany(ml => ml.Products).Cascade.All();
          }
      }

      public class ProductMap : ClassMap<Product>
      {
          public ProductMap()
          {
              Id(prod => prod.Id);
              Map(prod => prod.ProductDescription);
          }
      }

      public class SizedProductMap : SubclassMap<SizedProduct>
      {
          public SizedProductMap()
          {
              Map(sp => sp.Size);
          }
      }

      public class BundleProductMap : SubclassMap<BundleProduct>
      {
          public BundleProductMap()
          {
              References(bp => bp.BundleItem1).Cascade.All();
              References(bp => bp.BundleItem2).Cascade.All();
          }
      }

    I haven't configured any reverse mappings, so a product doesn't know which lists it belongs to or which bundles it is part of. Next I add some products to my list:

      MyList ml = new MyList() { Name = "Example" };
      ml.Products.Add(new Product() { ProductDescription = "PSU" });
      ml.Products.Add(new SizedProduct() { ProductDescription = "Extension Cable", Size = 2.0M });
      ml.Products.Add(new BundleProduct()
      {
          ProductDescription = "Fan & Cable",
          BundleItem1 = new Product() { ProductDescription = "Fan Power Cable" },
          BundleItem2 = new SizedProduct() { ProductDescription = "80mm Fan", Size = 80M }
      });

    When I persist my list to the database and reload it, the list itself contains the items I expect, i.e. MyList[0] has a type of Product, MyList[1] has a type of SizedProduct, and MyList[2] has a type of BundleProduct. Great! But if I navigate to the BundleProduct, I'm not able to see the types of Product attached to BundleItem1 or BundleItem2; instead they are always proxies of Product. In this example BundleItem2 should be a SizedProduct. Is there anything I can do to resolve this, either in my model or in the mapping? Thanks in advance for your help.

    Read the article

  • Saving child collections with NHibernate

    - by Ben
    Hi, I am in the process of learning NHibernate, so bear with me. I have an Order class and a Transaction class. Order has a one-to-many association with Transaction. The Transactions table in my database has a not-null constraint on the OrderId foreign key.

    Order class:

      public class Order
      {
          public virtual Guid Id { get; set; }
          public virtual DateTime CreatedOn { get; set; }
          public virtual decimal Total { get; set; }
          public virtual ICollection<Transaction> Transactions { get; set; }

          public Order()
          {
              Transactions = new HashSet<Transaction>();
          }
      }

    Order mapping:

      <class name="Order" table="Orders">
        <cache usage="read-write"/>
        <id name="Id">
          <generator class="guid"/>
        </id>
        <property name="CreatedOn" type="datetime"/>
        <property name="Total" type="decimal"/>
        <set name="Transactions" table="Transactions" lazy="false" inverse="true">
          <key column="OrderId"/>
          <one-to-many class="Transaction"/>
        </set>

    Transaction class:

      public class Transaction
      {
          public virtual Guid Id { get; set; }
          public virtual DateTime ExecutedOn { get; set; }
          public virtual bool Success { get; set; }
          public virtual Order Order { get; set; }
      }

    Transaction mapping:

      <class name="Transaction" table="Transactions">
        <cache usage="read-write"/>
        <id name="Id" column="Id" type="Guid">
          <generator class="guid"/>
        </id>
        <property name="ExecutedOn" type="datetime"/>
        <property name="Success" type="bool"/>
        <many-to-one name="Order" class="Order" column="OrderId" not-null="true"/>

    Really I don't want a bidirectional association. There is no need for my Transaction objects to reference their Order object directly (I just need to access the transactions of an order). However, I had to add this so that Order.Transactions is persisted to the database.

    Repository:

      public void Update(Order entity)
      {
          using (ISession session = NHibernateHelper.OpenSession())
          {
              using (ITransaction transaction = session.BeginTransaction())
              {
                  session.Update(entity);
                  foreach (var tx in entity.Transactions)
                  {
                      tx.Order = entity;
                      session.SaveOrUpdate(tx);
                  }
                  transaction.Commit();
              }
          }
      }

    My problem is that this will then issue an update for every transaction in the order's collection (regardless of whether it has changed or not). What I was trying to get around was having to explicitly save the transactions before saving the order; instead I want to just add the transactions to the order and then save the order:

      public void Can_add_transaction_to_existing_order()
      {
          var orderRepo = new OrderRepository();
          var order = orderRepo.GetById(new Guid("aa3b5d04-c5c8-4ad9-9b3e-9ce73e488a9f"));

          Transaction tx = new Transaction();
          tx.ExecutedOn = DateTime.Now;
          tx.Success = true;

          order.Transactions.Add(tx);
          orderRepo.Update(order);
      }

    Although I have found quite a few articles covering the setup of a one-to-many association, most of them discuss retrieving data rather than persisting it back. Many thanks, Ben

    Read the article

  • S#harp architecture mapping many to many and ado.net data services: A single resource was expected for the result

    - by Leg10n
    Hi, I'm developing an application that reads data from a SQL Server database (migrated from a legacy DB) with NHibernate and S#arp Architecture, through ADO.NET Data Services. I'm trying to map a many-to-many relationship. I have an Error class:

      public class Error
      {
          public virtual int ERROR_ID { get; set; }
          public virtual string ERROR_CODE { get; set; }
          public virtual string DESCRIPTION { get; set; }
          public virtual IList<ErrorGroup> GROUPS { get; protected set; }
      }

    And then I have the error group class:

      public class ErrorGroup
      {
          public virtual int ERROR_GROUP_ID { get; set; }
          public virtual string ERROR_GROUP_NAME { get; set; }
          public virtual string DESCRIPTION { get; set; }
          public virtual IList<Error> ERRORS { get; protected set; }
      }

    And the overrides:

      public class ErrorGroupOverride : IAutoMappingOverride<ErrorGroup>
      {
          public void Override(AutoMapping<ErrorGroup> mapping)
          {
              mapping.Table("ERROR_GROUP");
              mapping.Id(x => x.ERROR_GROUP_ID, "ERROR_GROUP_ID");
              mapping.IgnoreProperty(x => x.Id);
              mapping.HasManyToMany<Error>(x => x.ERRORS)
                  .Table("ERROR_GROUP_LINK")
                  .ParentKeyColumn("ERROR_GROUP_ID")
                  .ChildKeyColumn("ERROR_ID").Inverse().AsBag();
          }
      }

      public class ErrorOverride : IAutoMappingOverride<Error>
      {
          public void Override(AutoMapping<Error> mapping)
          {
              mapping.Table("ERROR");
              mapping.Id(x => x.ERROR_ID, "ERROR_ID");
              mapping.IgnoreProperty(x => x.Id);
              mapping.HasManyToMany<ErrorGroup>(x => x.GROUPS)
                  .Table("ERROR_GROUP_LINK")
                  .ParentKeyColumn("ERROR_ID")
                  .ChildKeyColumn("ERROR_GROUP_ID").AsBag();
          }
      }

    When I view the data service in the browser, http://localhost:1905/DataService.svc/Errors shows the list of errors with no problems, and http://localhost:1905/DataService.svc/Errors(123) works too.

    The problem: when I want to see the errors in a group, or the groups for an error, like "http://localhost:1905/DataService.svc/Errors(123)?$expand=GROUPS", I get the XML document, but the browser says:

      The XML page cannot be displayed
      Cannot view XML input using XSL style sheet. Please correct the error and then click the Refresh button, or try again later.
      --------------------------------------------------------------------------------
      Only one top level element is allowed in an XML document. Error processing resource 'http://localhost:1905/DataServic...
      <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
      -^

    I view the source code and I do get the data; however, it comes with an exception:

      <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
        <code></code>
        <message xml:lang="en-US">An error occurred while processing this request.</message>
        <innererror xmlns="xmlns">
          <message>A single resource was expected for the result, but multiple resources were found.</message>
          <type>System.InvalidOperationException</type>
          <stacktrace>
            at System.Data.Services.Serializers.Serializer.WriteRequest(IEnumerator queryResults, Boolean hasMoved)&#xD;
            at System.Data.Services.ResponseBodyWriter.Write(Stream stream)
          </stacktrace>
        </innererror>
      </error>

    Am I missing something? Where does this error come from?

    Read the article
