Search Results

Search found 51790 results on 2072 pages for 'long running'.


  • Why do ICMP Redirect Host messages happen?

    - by El Barto
    I'm setting up a Debian box as a router for 4 subnets. For that I have defined 4 virtual interfaces on the NIC where the LAN is connected (eth1). eth1 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.1.1 Bcast:10.1.1.255 Mask:255.255.255.0 inet6 addr: fe80::960c:6dff:fe82:d98/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:6026521 errors:0 dropped:0 overruns:0 frame:0 TX packets:35331299 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:673201397 (642.0 MiB) TX bytes:177276932 (169.0 MiB) Interrupt:19 Base address:0x6000 eth1:0 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.2.1 Bcast:10.1.2.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:19 Base address:0x6000 eth1:1 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.3.1 Bcast:10.1.3.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:19 Base address:0x6000 eth1:2 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.4.1 Bcast:10.1.4.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:19 Base address:0x6000 eth2 Link encap:Ethernet HWaddr 6c:f0:49:a4:47:38 inet addr:192.168.1.10 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::6ef0:49ff:fea4:4738/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:199809345 errors:0 dropped:0 overruns:0 frame:0 TX packets:158362936 errors:0 dropped:0 overruns:0 carrier:1 collisions:0 txqueuelen:1000 RX bytes:3656983762 (3.4 GiB) TX bytes:1715848473 (1.5 GiB) Interrupt:27 eth3 Link encap:Ethernet HWaddr 94:0c:6d:82:c8:72 inet addr:192.168.2.5 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::960c:6dff:fe82:c872/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:110814 errors:0 dropped:0 overruns:0 frame:0 TX packets:73386 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:16044901 (15.3 MiB) TX bytes:42125647 (40.1 MiB) Interrupt:20 Base address:0x2000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:22351 errors:0 dropped:0 overruns:0 frame:0 TX packets:22351 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2625143 (2.5 MiB) TX bytes:2625143 (2.5 MiB) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:41358924 errors:0 dropped:0 overruns:0 frame:0 TX packets:23116350 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:3065505744 (2.8 GiB) TX bytes:1324358330 (1.2 GiB) I have two other computers connected to this network. One has IP 10.1.1.12 (subnet mask 255.255.255.0) and the other one 10.1.2.20 (subnet mask 255.255.255.0). I want to be able to reach 10.1.1.12 from 10.1.2.20. Since packet forwarding is enabled in the router and the policy of the FORWARD chain is ACCEPT (and there are no other rules), I understand that there should be no problem to ping from 10.1.2.20 to 10.1.1.12 going through the router. 
However, this is what I get: $ ping -c15 10.1.1.12 PING 10.1.1.12 (10.1.1.12): 56 data bytes Request timeout for icmp_seq 0 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 81d4 0 0000 3f 01 e2b3 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 1 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 899b 0 0000 3f 01 daec 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 2 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 78fe 0 0000 3f 01 eb89 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 3 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 14b8 0 0000 3f 01 4fd0 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 4 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 8ef7 0 0000 3f 01 d590 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 5 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 ec9d 0 0000 3f 01 77ea 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 6 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 70e6 0 0000 3f 01 f3a1 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 7 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 b0d2 0 0000 3f 01 b3b5 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 8 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 f8b4 0 0000 3f 01 6bd3 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 9 Request timeout for icmp_seq 10 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 1c95 0 0000 3f 01 47f3 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 11 Request timeout for icmp_seq 12 Request timeout for icmp_seq 13 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 62bc 0 0000 3f 01 01cc 10.1.2.20 10.1.1.12 Why does this happen? From what I've read the Redirect Host response has something to do with the fact that the two hosts are in the same network and there being a shorter route (or so I understood). They are in fact in the same physical network, but why would there be a better route if they are not on the same subnet (they can't see each other)? What am I missing? 
Some extra info you might want to see: # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.8.0.2 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo 192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth3 10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 192.168.1.0 0.0.0.0 255.255.255.0 U 1 0 0 eth2 10.1.4.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 10.1.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 10.1.3.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth2 0.0.0.0 192.168.2.1 0.0.0.0 UG 100 0 0 eth3 # iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination # iptables -L -n -t nat Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- !10.0.0.0/8 10.0.0.0/8 MASQUERADE all -- 10.0.0.0/8 !10.0.0.0/8 Chain OUTPUT (policy ACCEPT) target prot opt source destination

    Read the article

  • Cassandra Production ready on Windows?

    - by BlackTea
    Does anyone know of any success stories of Cassandra running on Windows in a production environment? I'm doing some work on Cassandra and trying to find the correct platform for it; currently the platform is Windows running MS SQL as the data store. What are the disadvantages, if any, of running Cassandra in a Windows environment?

    Read the article

  • Linq 2 SQL using base class and WCF

    - by Gena Verdel
    Hi all. I have the following problem: I'm using L2S for generating entity classes. All these classes share the same property ID which is autonumber. So I figured to put this property to base class and extend all entity classes from the base one. In order to be able to read the value I'm using the override modifier on this property in each and every entity class. Up to now it's live and kicking. Then I decided to introduce another tier - services using WCF approach. I've modified the Serialization mode to Unidirectional (and added the IsReference=true attribute to enable two directions), also added [DataContract] attribute to the BaseObject class. WCF is able to transport the whole object but one property , which is ID. Applying [DataMember] attribute on ID property at the base class resulted in nothing. Am I missing something? Is what I'm trying to achieve possible at all? [DataContract()] abstract public class BaseObject : IIccObject public virtual long ID { get; set; } [Table(Name="dbo.Blocks")] [DataContract(IsReference=true)] public partial class Block : INotifyPropertyChanging, INotifyPropertyChanged { private static PropertyChangingEventArgs emptyChangingEventArgs = new PropertyChangingEventArgs(String.Empty); private long _ID; private int _StatusID; private string _Name; private bool _IsWithControlPoints; private long _DivisionID; private string _SHAPE; private EntitySet<BlockByWorkstation> _BlockByWorkstations; private EntitySet<PlanningPointAppropriation> _PlanningPointAppropriations; private EntitySet<Neighbor> _Neighbors; private EntitySet<Neighbor> _Neighbors1; private EntitySet<Task> _Tasks; private EntitySet<PlanningPointByBlock> _PlanningPointByBlocks; private EntityRef<Division> _Division; private bool serializing; #region Extensibility Method Definitions partial void OnLoaded(); partial void OnValidate(System.Data.Linq.ChangeAction action); partial void OnCreated(); partial void OnIDChanging(long value); partial void OnIDChanged(); partial void OnStatusIDChanging(int value); partial void OnStatusIDChanged(); partial void OnNameChanging(string value); partial void OnNameChanged(); partial void OnIsWithControlPointsChanging(bool value); partial void OnIsWithControlPointsChanged(); partial void OnDivisionIDChanging(long value); partial void OnDivisionIDChanged(); partial void OnSHAPEChanging(string value); partial void OnSHAPEChanged(); #endregion public Block() { this.Initialize(); } [Column(Storage="_ID", AutoSync=AutoSync.OnInsert, DbType="BigInt NOT NULL IDENTITY", IsPrimaryKey=true, IsDbGenerated=true)] [DataMember(Order=1)] public override long ID { get { return this._ID; } set { if ((this._ID != value)) { this.OnIDChanging(value); this.SendPropertyChanging(); this._ID = value; this.SendPropertyChanged("ID"); this.OnIDChanged(); } } } [Column(Storage="_StatusID", DbType="Int NOT NULL")] [DataMember(Order=2)] public int StatusID { get { return this._StatusID; } set { if ((this._StatusID != value)) { this.OnStatusIDChanging(value); this.SendPropertyChanging(); this._StatusID = value; this.SendPropertyChanged("StatusID"); this.OnStatusIDChanged(); } } } [Column(Storage="_Name", DbType="NVarChar(255)")] [DataMember(Order=3)] public string Name { get { return this._Name; } set { if ((this._Name != value)) { this.OnNameChanging(value); this.SendPropertyChanging(); this._Name = value; this.SendPropertyChanged("Name"); this.OnNameChanged(); } } } [Column(Storage="_IsWithControlPoints", DbType="Bit NOT NULL")] [DataMember(Order=4)] public bool IsWithControlPoints { get { 
return this._IsWithControlPoints; } set { if ((this._IsWithControlPoints != value)) { this.OnIsWithControlPointsChanging(value); this.SendPropertyChanging(); this._IsWithControlPoints = value; this.SendPropertyChanged("IsWithControlPoints"); this.OnIsWithControlPointsChanged(); } } } [Column(Storage="_DivisionID", DbType="BigInt NOT NULL")] [DataMember(Order=5)] public long DivisionID { get { return this._DivisionID; } set { if ((this._DivisionID != value)) { if (this._Division.HasLoadedOrAssignedValue) { throw new System.Data.Linq.ForeignKeyReferenceAlreadyHasValueException(); } this.OnDivisionIDChanging(value); this.SendPropertyChanging(); this._DivisionID = value; this.SendPropertyChanged("DivisionID"); this.OnDivisionIDChanged(); } } } [Column(Storage="_SHAPE", DbType="Text", UpdateCheck=UpdateCheck.Never)] [DataMember(Order=6)] public string SHAPE { get { return this._SHAPE; } set { if ((this._SHAPE != value)) { this.OnSHAPEChanging(value); this.SendPropertyChanging(); this._SHAPE = value; this.SendPropertyChanged("SHAPE"); this.OnSHAPEChanged(); } } } [Association(Name="Block_BlockByWorkstation", Storage="_BlockByWorkstations", ThisKey="ID", OtherKey="BlockID")] [DataMember(Order=7, EmitDefaultValue=false)] public EntitySet<BlockByWorkstation> BlockByWorkstations { get { if ((this.serializing && (this._BlockByWorkstations.HasLoadedOrAssignedValues == false))) { return null; } return this._BlockByWorkstations; } set { this._BlockByWorkstations.Assign(value); } } [Association(Name="Block_PlanningPointAppropriation", Storage="_PlanningPointAppropriations", ThisKey="ID", OtherKey="MasterBlockID")] [DataMember(Order=8, EmitDefaultValue=false)] public EntitySet<PlanningPointAppropriation> PlanningPointAppropriations { get { if ((this.serializing && (this._PlanningPointAppropriations.HasLoadedOrAssignedValues == false))) { return null; } return this._PlanningPointAppropriations; } set { this._PlanningPointAppropriations.Assign(value); } } [Association(Name="Block_Neighbor", Storage="_Neighbors", ThisKey="ID", OtherKey="FirstBlockID")] [DataMember(Order=9, EmitDefaultValue=false)] public EntitySet<Neighbor> Neighbors { get { if ((this.serializing && (this._Neighbors.HasLoadedOrAssignedValues == false))) { return null; } return this._Neighbors; } set { this._Neighbors.Assign(value); } } [Association(Name="Block_Neighbor1", Storage="_Neighbors1", ThisKey="ID", OtherKey="SecondBlockID")] [DataMember(Order=10, EmitDefaultValue=false)] public EntitySet<Neighbor> Neighbors1 { get { if ((this.serializing && (this._Neighbors1.HasLoadedOrAssignedValues == false))) { return null; } return this._Neighbors1; } set { this._Neighbors1.Assign(value); } } [Association(Name="Block_Task", Storage="_Tasks", ThisKey="ID", OtherKey="BlockID")] [DataMember(Order=11, EmitDefaultValue=false)] public EntitySet<Task> Tasks { get { if ((this.serializing && (this._Tasks.HasLoadedOrAssignedValues == false))) { return null; } return this._Tasks; } set { this._Tasks.Assign(value); } } [Association(Name="Block_PlanningPointByBlock", Storage="_PlanningPointByBlocks", ThisKey="ID", OtherKey="BlockID")] [DataMember(Order=12, EmitDefaultValue=false)] public EntitySet<PlanningPointByBlock> PlanningPointByBlocks { get { if ((this.serializing && (this._PlanningPointByBlocks.HasLoadedOrAssignedValues == false))) { return null; } return this._PlanningPointByBlocks; } set { this._PlanningPointByBlocks.Assign(value); } } [Association(Name="Division_Block", Storage="_Division", ThisKey="DivisionID", OtherKey="ID", IsForeignKey=true, 
DeleteOnNull=true, DeleteRule="CASCADE")] public Division Division { get { return this._Division.Entity; } set { Division previousValue = this._Division.Entity; if (((previousValue != value) || (this._Division.HasLoadedOrAssignedValue == false))) { this.SendPropertyChanging(); if ((previousValue != null)) { this._Division.Entity = null; previousValue.Blocks.Remove(this); } this._Division.Entity = value; if ((value != null)) { value.Blocks.Add(this); this._DivisionID = value.ID; } else { this._DivisionID = default(long); } this.SendPropertyChanged("Division"); } } } public event PropertyChangingEventHandler PropertyChanging; public event PropertyChangedEventHandler PropertyChanged; protected virtual void SendPropertyChanging() { if ((this.PropertyChanging != null)) { this.PropertyChanging(this, emptyChangingEventArgs); } } protected virtual void SendPropertyChanged(String propertyName) { if ((this.PropertyChanged != null)) { this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName)); } } private void attach_BlockByWorkstations(BlockByWorkstation entity) { this.SendPropertyChanging(); entity.Block = this; } private void detach_BlockByWorkstations(BlockByWorkstation entity) { this.SendPropertyChanging(); entity.Block = null; } private void attach_PlanningPointAppropriations(PlanningPointAppropriation entity) { this.SendPropertyChanging(); entity.Block = this; } private void detach_PlanningPointAppropriations(PlanningPointAppropriation entity) { this.SendPropertyChanging(); entity.Block = null; } private void attach_Neighbors(Neighbor entity) { this.SendPropertyChanging(); entity.FirstBlock = this; } private void detach_Neighbors(Neighbor entity) { this.SendPropertyChanging(); entity.FirstBlock = null; } private void attach_Neighbors1(Neighbor entity) { this.SendPropertyChanging(); entity.SecondBlock = this; } private void detach_Neighbors1(Neighbor entity) { this.SendPropertyChanging(); entity.SecondBlock = null; } private void attach_Tasks(Task entity) { this.SendPropertyChanging(); entity.Block = this; } private void detach_Tasks(Task entity) { this.SendPropertyChanging(); entity.Block = null; } private void attach_PlanningPointByBlocks(PlanningPointByBlock entity) { this.SendPropertyChanging(); entity.Block = this; } private void detach_PlanningPointByBlocks(PlanningPointByBlock entity) { this.SendPropertyChanging(); entity.Block = null; } private void Initialize() { this._BlockByWorkstations = new EntitySet<BlockByWorkstation>(new Action<BlockByWorkstation>(this.attach_BlockByWorkstations), new Action<BlockByWorkstation>(this.detach_BlockByWorkstations)); this._PlanningPointAppropriations = new EntitySet<PlanningPointAppropriation>(new Action<PlanningPointAppropriation>(this.attach_PlanningPointAppropriations), new Action<PlanningPointAppropriation>(this.detach_PlanningPointAppropriations)); this._Neighbors = new EntitySet<Neighbor>(new Action<Neighbor>(this.attach_Neighbors), new Action<Neighbor>(this.detach_Neighbors)); this._Neighbors1 = new EntitySet<Neighbor>(new Action<Neighbor>(this.attach_Neighbors1), new Action<Neighbor>(this.detach_Neighbors1)); this._Tasks = new EntitySet<Task>(new Action<Task>(this.attach_Tasks), new Action<Task>(this.detach_Tasks)); this._PlanningPointByBlocks = new EntitySet<PlanningPointByBlock>(new Action<PlanningPointByBlock>(this.attach_PlanningPointByBlocks), new Action<PlanningPointByBlock>(this.detach_PlanningPointByBlocks)); this._Division = default(EntityRef<Division>); OnCreated(); } [OnDeserializing()] 
[System.ComponentModel.EditorBrowsableAttribute(EditorBrowsableState.Never)] public void OnDeserializing(StreamingContext context) { this.Initialize(); } [OnSerializing()] [System.ComponentModel.EditorBrowsableAttribute(EditorBrowsableState.Never)] public void OnSerializing(StreamingContext context) { this.serializing = true; } [OnSerialized()] [System.ComponentModel.EditorBrowsableAttribute(EditorBrowsableState.Never)] public void OnSerialized(StreamingContext context) { this.serializing = false; } }

    Read the article

  • PHP, Ajax, and the lifespan of the request

    - by Dave
    I was wondering about the lifespan of a PHP script when called via Ajax. Assume that there is a long-running (i.e., 30 seconds) PHP script on a server and that it is requested via Ajax. Before the script completes, the user closes the browser. Does the script continue running to completion, is it terminated, or is this a function of the server itself (I'm running Apache, FWIW)? Thanks for any help.

    Read the article

  • Update Azure Service Configuration File using Powershell

    - by David Osborn
    I'm trying to write a PowerShell script that updates each of the DiagnosticsConnectionString and DataConnectionString values below, but I can't seem to find each individual Role node using $serviceconfig.ServiceConfiguration.SelectSingleNode("Role[@name='MyService_WorkerRole']"). Running echo $serviceconfig.ServiceConfiguration.Role lists out both Role nodes for me, so I know it is working up to that point, but after that I am not having much success. $serviceConfig contains the following XML: <?xml version="1.0"?> <ServiceConfiguration serviceName="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"> <Role name="MyService_WorkerRole"> <Instances count="1" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="really long string" /> <Setting name="DataConnectionString" value="really long string 2" /> </ConfigurationSettings> </Role> <Role name="MyService_WebRole"> <Instances count="1" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="really long string 3" /> <Setting name="DataConnectionString" value="really long string 4" /> </ConfigurationSettings> </Role> </ServiceConfiguration>

    Read the article

  • where did the _syscallN macros go in <linux/unistd.h>?

    - by Evan Teran
    It used to be the case that if you needed to make a system call directly in linux without the use of an existing library, you could just include <linux/unistd.h> and it would define a macro similar to this: #define _syscall3(type,name,type1,arg1,type2,arg2,type3,arg3) \ type name(type1 arg1,type2 arg2,type3 arg3) \ { \ long __res; \ __asm__ volatile ("int $0x80" \ : "=a" (__res) \ : "0" (__NR_##name),"b" ((long)(arg1)),"c" ((long)(arg2)), \ "d" ((long)(arg3))); \ if (__res>=0) \ return (type) __res; \ errno=-__res; \ return -1; \ } Then you could just put somewhere in your code: _syscall3(ssize_t, write, int, fd, const void *, buf, size_t, count); which would define a write function for you that properly performed the system call. It seems that this system has been superseded by something (i am guessing that "[vsyscall]" page that every process gets) more robust. So what is the proper way (please be specific) for a program to perform a system call directly on newer linux kernels? I realize that I should be using libc and let it do the work for me. But let's assume that I have a decent reason for wanting to know how to do this :-).
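
    The accepted answer isn't quoted in this excerpt, but the commonly documented route on kernels where the _syscallN macros are gone is the syscall(2) wrapper from <unistd.h>, with the call numbers taken from <sys/syscall.h>. A minimal sketch (Linux with glibc assumed; errno-based error handling omitted):

        #include <unistd.h>       // syscall()
        #include <sys/syscall.h>  // SYS_write, SYS_getpid, ...
        #include <cstdio>

        int main() {
            // Call the kernel directly, bypassing the usual libc wrappers.
            long pid = syscall(SYS_getpid);

            const char msg[] = "hello via syscall()\n";
            long written = syscall(SYS_write, 1, msg, sizeof(msg) - 1);

            std::printf("pid=%ld, wrote %ld bytes\n", pid, written);
            return 0;
        }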

    Read the article

  • Going "behind Hibernate's back" to update foreign key values without an associated entity

    - by Alex Cruise
    Updated: I wound up "solving" the problem by doing the opposite! I now have the entity reference field set as read-only (insertable=false updatable=false), and the foreign key field read-write. This means I need to take special care when saving new entities, but on querying, the entity properties get resolved for me. I have a bidirectional one-to-many association in my domain model, where I'm using JPA annotations and Hibernate as the persistence provider. It's pretty much your bog-standard parent/child configuration, with one difference being that I want to expose the parent's foreign key as a separate property of the child alongside the reference to a parent instance, like so: @Entity public class Child { @Id @GeneratedValue Long id; @Column(name="parent_id", insertable=false, updatable=false) private Long parentId; @ManyToOne(cascade=CascadeType.ALL) @JoinColumn(name="parent_id") private Parent parent; private long timestamp; } @Entity public class Parent { @Id @GeneratedValue Long id; @OrderBy("timestamp") @OneToMany(mappedBy="parent", cascade=CascadeType.ALL, fetch=FetchType.LAZY) private List<Child> children; } This works just fine most of the time, but there are many (legacy) cases when I'd like to put an invalid value in the parent_id column without having to create a bogus Parent first. Unfortunately, Hibernate won't save values assigned to the parentId field due to insertable=false, updatable=false, which it requires when the same column is mapped to multiple properties. Is there any nice way to "go behind Hibernate's back" and sneak values into that field without having to drop down to JDBC or implement an interceptor? Thanks!

    Read the article

  • Conversion from string to type 'Double' is not valid.

    - by Adnan Badar
    Hi, I am getting the following error while running a select statement (using OleDbCommand). My query is SELECT CME FROM Personnel WHERE CME = '11349D' If objOleDbCom.ExecuteScalar() 0 Then When I execute the above statement I get this error: Conversion from string "11349D" to type 'Double' is not valid. My field CME's data type is Text and my database is Access 2007. I tried running my query directly inside the database and it runs fine. Please suggest. Thanks.

    Read the article

  • What exactly is the GNU tar ././@LongLink "trick"?

    - by Cheeso
    I read that a tar entry type of 'L' (76) is used by gnu tar and gnu-compliant tar utilities to indicate that the next entry in the archive has a "long" name. In this case the header block with the entry type of 'L' usually encodes the name ././@LongLink . My question is: where is the format of the next block described? The format of a tar archive is very simple: it is just a series of 512-byte blocks. In the normal case, each file in a tar archive is represented as a series of blocks. The first block is a header block, containing the file name, entry type, modified time, and other metadata. Then the raw file data follows, using as many 512-byte blocks as required. Then the next entry. If the filename is longer than will fit in the space allocated in the header block, gnu tar apparently uses what's known as "the ././@LongLink trick". I can't find a precise description for it. When the entry type is 'L', how do I know how long the "long" filename is? Is the long name limited to 512 bytes, in other words, whatever fits in one block? Most importantly: where is this documented?
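
    For reference (this is general GNU tar format behaviour rather than anything quoted in the excerpt): the 'L' header is a pseudo entry whose size field holds the length of the long name, so the name is not limited to one block; the name text fills the following data block(s), zero-padded to a 512-byte multiple, and the block right after that is the real header for the file, whose truncated name field should be ignored. It is described in the GNU tar manual's section on its format extensions. A rough sketch of reading it, assuming ustar field offsets (size at byte 124, typeflag at byte 156); the helper name is made up and checksum/error handling are omitted:

        #include <istream>
        #include <string>
        #include <cstdlib>

        // Sketch only: return the name of the next entry from an uncompressed tar
        // stream positioned at a header block, handling the GNU 'L' pseudo entry.
        std::string nextEntryName(std::istream& in) {
            char header[512];
            in.read(header, 512);

            if (header[156] != 'L') {                 // ordinary entry: name is in the 100-byte field
                std::string raw(header, 100);
                return raw.substr(0, raw.find('\0'));
            }

            // GNU long-name entry: the size field (12 bytes of octal ASCII) gives the
            // length of the long name, so it is not limited to 512 bytes. The name
            // occupies the following data blocks, zero-padded to a 512-byte multiple.
            long nameLen = std::strtol(header + 124, nullptr, 8);
            long padded  = ((nameLen + 511) / 512) * 512;

            std::string data(padded, '\0');
            in.read(&data[0], padded);

            // The very next 512-byte block is the real header for this file; its
            // truncated name field should be ignored in favour of the value below.
            return data.substr(0, data.find('\0'));
        }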

    Read the article

  • Indenting Paragraph With cout

    - by Eric
    Given a string of unknown length, how can you output it using cout so that the entire string displays as an indented block of text on the console? (so that even if the string wraps to a new line, the second line would have the same level of indentation) Example: cout << "This is a short string that isn't indented." << endl; cout << /* Indenting Magic */ << "This is a very long string that will wrap to the next line because it is a very long string that will wrap to the next line..." << endl; And the desired output: This is a short string that isn't indented. This is a very long string that will wrap to the next line because it is a very long string that will wrap to the next line... Edit: The homework assignment I'm working on is complete. The assignment has nothing to do with getting the output to format as in the above example, so I probably shouldn't have included the homework tag. This is just for my own enlightment. I know I could count through the characters in the string, see when I get to the end of a line, then spit out a newline and output -x- number of spaces each time. I'm interested to know if there is a simpler, idiomatic C++ way to accomplish the above.
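
    There is no standard manipulator for this: plain cout cannot know where the console will wrap, so the idiomatic approach is essentially the one described in the question, packaged as a small helper that does the word-wrapping itself at a width you choose. A sketch (the helper name, width and indent values are arbitrary examples):

        #include <iostream>
        #include <sstream>
        #include <string>
        #include <cstddef>

        // Word-wrap `text` to `width` columns and prefix every line with `indent`
        // spaces. iostreams have no built-in manipulator for hanging indents, so a
        // small helper like this does the counting the question describes.
        void printIndented(const std::string& text, std::size_t indent, std::size_t width) {
            std::istringstream words(text);
            std::string word, pad(indent, ' ');
            std::size_t col = 0;

            while (words >> word) {
                if (col == 0) {                            // very first line
                    std::cout << pad;
                    col = indent;
                } else if (col + 1 + word.size() > width) {
                    std::cout << '\n' << pad;              // word would overflow: wrap
                    col = indent;
                } else {
                    std::cout << ' ';
                    ++col;
                }
                std::cout << word;
                col += word.size();
            }
            std::cout << '\n';
        }

        int main() {
            std::cout << "This is a short string that isn't indented.\n";
            printIndented("This is a very long string that will wrap to the next line "
                          "because it is a very long string that will wrap to the next line...",
                          4, 60);
        }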

    Read the article

  • Measuring time spent in application / thread

    - by Adamski
    I am writing a simulation in Java whereby objects act under Newtonian physics. An object may have a force applied to it and the resulting velocity causes it to move across the screen. The nature of the simulation means that objects move in discrete steps depending on the time elapsed between the current and previous iteration of the animation loop; e.g. public void animationLoop() { long prev = System.currentTimeMillis(); long now; while(true) { now = System.currentTimeMillis(); long deltaMillis = now - prev; prev = now; if (deltaMillis > 0) { // Some time has passed for (Mass m : masses) { m.updatePosition(deltaMillis); } // Do all repaints. } } } A problem arises if the animation thread is delayed in some way, causing a large amount of time to elapse (the classic case being under Windows whereby clicking and holding on minimise / maximise prevents a repaint), which causes objects to move at an alarming rate. My question: Is there a way to determine the time spent in the animation thread rather than the wall-clock time, or can anyone suggest a workaround to avoid this problem? My only thought so far is to constrain deltaMillis by some upper bound.
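
    The closing idea of constraining deltaMillis by an upper bound is a common fix: a stalled thread then costs at most one capped step instead of one huge jump. The question is Java, but the pattern is language-agnostic; below is a minimal C++ rendering of the same loop with the delta clamped and a monotonic clock used instead of wall-clock time (the 100 ms cap is an arbitrary example value):

        #include <chrono>
        #include <algorithm>
        #include <vector>

        struct Mass { void updatePosition(long long deltaMillis) { /* integrate motion */ } };

        // Same loop shape as the question, but the elapsed time is clamped so a
        // stalled animation thread (minimised window, GC pause, ...) cannot feed
        // a huge delta into the physics.
        void animationLoop(std::vector<Mass>& masses) {
            using clock = std::chrono::steady_clock;   // monotonic, unlike wall-clock time
            auto prev = clock::now();

            while (true) {
                auto now = clock::now();
                long long deltaMillis =
                    std::chrono::duration_cast<std::chrono::milliseconds>(now - prev).count();
                prev = now;

                deltaMillis = std::min<long long>(deltaMillis, 100);  // upper bound per step
                if (deltaMillis > 0) {
                    for (Mass& m : masses)
                        m.updatePosition(deltaMillis);
                }
                // repaint / sleep here, as in the original loop
            }
        }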

    Read the article

  • ClassCastException in iterating list returned by Query using Hibernate Query Language

    - by Tushar Paliwal
    I'm a beginner in Hibernate. I'm trying the simplest example using HQL, but it throws a ClassCastException at line 25 when I try to iterate the list. When I try to cast the object returned by the iterator's next() method it generates the same problem. I could not identify the problem; kindly give me a solution. Employee.java package one; import javax.persistence.Entity; import javax.persistence.Id; @Entity public class Employee { @Id private Long id; private String name; public Long getId() { return id; } public void setId(Long id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } public Employee(Long id, String name) { super(); this.id = id; this.name = name; } public Employee() { } } Main2.java package one; import java.util.Iterator; import java.util.List; import org.hibernate.Query; import org.hibernate.Session; import org.hibernate.SessionFactory; import org.hibernate.Transaction; import org.hibernate.cfg.Configuration; public class Main2 { public static void main(String[] args) { SessionFactory sf=new Configuration().configure().buildSessionFactory(); Session s1=sf.openSession(); Query q=s1.createQuery("from Employee "); Transaction tx=s1.beginTransaction(); List l=q.list(); Iterator itr=l.iterator(); while(itr.hasNext()) { Object obj[]=(Object[])itr.next();//Line 25 for(Object temp:obj) { System.out.println(temp); } } tx.commit(); s1.close(); sf.close(); } }

    Read the article

  • How to get number of rows deleted from mysql in shell script

    - by simonlord
    Hi all, I can't work out how to get the mysql client to return the number of rows deleted to the shell when running a delete. Does anyone know what option will enable this? Or ways around it? Here's what I'm trying, but I get no output: #!/bin/bash deleted=`mysql mydb -e "delete from mytable where insertedtime < '2010-04-01 00:00:00'"|tail -n 1` I was expecting something like this as the output from mysql: deleted 999999 Which is why I have the tail -n 1, so I only pick up the count and not the column name. When running the command by hand (mysql mydb -e "delete from mytable where insertedtime < '2010-04-01 00:00:00'") there is no output. When running the command interactively in the mysql client I get the following: mysql>delete from mytable where insertedtime < '2010-04-01 00:00:00'; Query OK, 0 rows affected (0.00 sec) I want to get the rows affected count into my shell variable. Any help would be most appreciated.

    Read the article

  • Current tasks count at Appengine Task Queue

    - by splix
    Is there any way to get the count of currently unfinished tasks on the Google App Engine development server? I need it for my integration test. I found a way to get this when running locally (just in memory), as described in the App Engine docs. But when I'm running it as a standalone server, from Maven, this doesn't work, because the appengine-testing library conflicts with the App Engine SDK classes and I can't use those classes together when running the SDK dev server.

    Read the article

  • User defined literal arguments are not constexpr?

    - by Pubby
    I'm testing out user defined literals. I want to make _fac return the factorial of the number. Having it call a constexpr function works, however it doesn't let me do it with templates as the compiler complains that the arguments are not and cannot be constexpr. I'm confused by this - aren't literals constant expressions? The 5 in 5_fac is always a literal that can be evaluated during compile time, so why can't I use it as such? First method: constexpr int factorial_function(int x) { return (x > 0) ? x * factorial_function(x - 1) : 1; } constexpr int operator "" _fac(unsigned long long x) { return factorial_function(x); // this works } Second method: template <int N> struct factorial { static const unsigned int value = N * factorial<N - 1>::value; }; template <> struct factorial<0> { static const unsigned int value = 1; }; constexpr int operator "" _fac(unsigned long long x) { return factorial_template<x>::value; // doesn't work - x is not a constexpr }
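
    A note on why the second method cannot work (general C++ rather than anything from the post itself): inside the operator's body, x is an ordinary function parameter, not a constant expression, so it can never be used as a template argument, even when the call site is a literal. If the digits really are wanted at compile time, the C++11 literal operator template receives them as a char... parameter pack. A sketch along those lines (names such as parse_digits are illustrative; decimal literals only):

        // Plain constexpr factorial (the "first method" from the question).
        constexpr unsigned long long fac(unsigned long long n) {
            return n > 1 ? n * fac(n - 1) : 1;
        }

        // Fold a pack of digit characters into a number at compile time.
        template <unsigned long long Acc>
        constexpr unsigned long long parse_digits() { return Acc; }

        template <unsigned long long Acc, char C, char... Rest>
        constexpr unsigned long long parse_digits() {
            return parse_digits<Acc * 10 + (C - '0'), Rest...>();
        }

        // Literal operator template: the characters of the literal arrive as
        // template arguments, so they are usable in constant expressions.
        template <char... Digits>
        constexpr unsigned long long operator"" _fac() {
            return fac(parse_digits<0, Digits...>());
        }

        static_assert(5_fac == 120, "evaluated at compile time");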

    Read the article

  • Windows Workflow runs very slowly on my DEV machine

    - by Joon
    I am developing an app that uses WF hosted in IIS as WCF services for the business layer. This runs quickly on any machine running Windows Server 2008 R2, but very slowly on our dev machines running Windows XP SP3. Yesterday the workflows were as fast on my dev machine as they are on the server for the whole day. Today they are back to running slowly again (I rebooted overnight). Has anyone else experienced this problem with workflows running slowly on IIS on XP? What did you do to fix it?

    Read the article

  • Error occurs while validating form input using jQuery in Firebug

    - by Param-Ganak
    I have written a custom validation code in jQuery, which is working fine. I have a login form which has two fields, i.e. userid and password. I have written a custom code for client side validation for these fields. This code is working fine and gives me proper error messages as per the situation. But the problem with this code is that when I enter the invalid data in any or both field and press submit button of form then it displays the proper error message but at the same time when I checked it in Firebug it displays following error message when submit button of the form is clicked validate is not defined function onclick(event) { javascript: return validate(); } (click clientX=473, clientY=273) Here is the JQUERY validation code $(document).ready(function (){ $("#id_login_form").validate({ rules: { userid: { required: true, minlength: 6, maxlength: 20, // basic: true }, password: { required: true, minlength: 6, maxlength: 15, // basic: true } }, messages: { userid: { required: " Please enter the username.", minlength: "User Name should be minimum 6 characters long.", maxlength: "User Name should be maximum 15 characters long.", // basic: "working here" }, password: { required: " Please enter the password.", minlength: "Password should be minimum 6 characters long.", maxlength: "Password should be maximum 15 characters long.", // basic: "working here too.", } }, errorClass: "errortext", errorLabelContainer: "#messagebox" } }); }); /* $.validator.addMethod('username_alphanum', function (value) { return /^(?![0-9]+$)[a-zA-Z 0-9_.]+$/.test(value); }, 'User name should be alphabetic or Alphanumeric and may contain . and _.'); $.validator.addMethod('alphanum', function (value) { return /^(?![a-zA-Z]+$)(?![0-9]+$)[a-zA-Z 0-9]+$/.test(value); }, 'Password should be Alphanumeric.'); $.validator.addMethod('basic', function (value) { return /^[a-zA-Z 0-9_.]+$/.test(value); }, 'working working working'); */ So please tell me where is I am wrong in my jQuery code. Thank You!

    Read the article

  • How can I split a LINESTRING into two LINESTRINGs at a given point?

    - by sabbour
    Hello, I'm trying to write a function that will split a LINESTRING into two LINESTRINGs given the split point. What I'm trying to achieve is a function that given a LINESTRING and a distance, it will return N LINESTRINGS for the original linestring splitted at multiples of that distance. This is what I have so far (I'm using SQL Server Spatial Tools from CodePlex): DECLARE @testLine geography; DECLARE @slicePoint geography; SET @testLine = geography::STGeomFromText('LINESTRING (34.5157942 28.5039665, 34.5157079 28.504725, 34.5156881 28.5049565, 34.5156773 28.505082, 34.5155642 28.5054437, 34.5155498 28.5054899, 34.5154937 28.5058826, 34.5154643 28.5060218, 34.5153968 28.5063415, 34.5153322 28.5065338, 34.5152031 28.5069178, 34.5150603 28.5072288, 34.5148716 28.5075501, 34.5146106 28.5079974, 34.5143617 28.5083813, 34.5141373 28.5086414, 34.5139954 28.5088441, 34.5138874 28.5089983, 34.5138311 28.5091054, 34.5136783 28.5093961, 34.5134336 28.5097531, 34.51325 28.5100794, 34.5130256 28.5105078, 34.5128754 28.5107957, 34.5126258 28.5113222, 34.5123984 28.5117673)', 4326) DECLARE @pointOne geography; declare @result table (segment geography) DECLARE @sliceDistance float DECLARE @nextSliceAt float SET @sliceDistance = 100 -- slice every 100 meters SET @nextSliceAt = @sliceDistance SELECT @pointOne = @testLine.STStartPoint() WHILE(@nextSliceAt < @testLine.STLength()) BEGIN SELECT @slicePoint = dbo.LocateAlongGeog(@testLine,@nextSliceAt) DECLARE @subLineString geography; SET @subLineString = geography::STGeomFromText('LINESTRING (' + dbo.FloatToVarchar(@pointOne.Long) + ' ' + dbo.FloatToVarchar(@pointOne.Lat) + ',' + dbo.FloatToVarchar(@slicePoint.Long) + ' ' + dbo.FloatToVarchar(@slicePoint.Lat) +')', 4326) insert into @result SELECT @subLineString SET @pointOne = @slicePoint set @nextSliceAt = @nextSliceAt + @sliceDistance END SET @subLineString = geography::STGeomFromText('LINESTRING (' + dbo.FloatToVarchar(@pointOne.Long) + ' ' + dbo.FloatToVarchar(@pointOne.Lat) + ',' + dbo.FloatToVarchar(@testLine.STEndPoint().Long) + ' ' + dbo.FloatToVarchar(@testLine.STEndPoint().Lat) +')', 4326) insert into @result SELECT @subLineString select * from @result I know it is not the best looking code, but there is another problem. The above code approximates the resulting LINESTRING because it does not follow the curvature of the original LINESTRING as it only takes into consideration the start and end points when creating the new segment. Is there a way take a substring out of the original LINESTRING given the start and end points?

    Read the article

  • AppFabric caching's local cache isn't working for us... What are we doing wrong?

    - by Olly
    We are using AppFabric as the second-level cache for an NHibernate ASP.NET application comprising a customer-facing website and an admin website. They are both connected to the same cache, so when admin updates something, the customer-facing site is updated. It seems to be working OK - we have a cache cluster on a separate server and all is well - but we want to enable the local cache to get better performance; however, it doesn't seem to be working. We have enabled it like this... bool UseLocalCache = int LocalCacheObjectCount = int.MaxValue; TimeSpan LocalCacheDefaultTimeout = TimeSpan.FromMinutes(3); DataCacheLocalCacheInvalidationPolicy LocalCacheInvalidationPolicy = DataCacheLocalCacheInvalidationPolicy.TimeoutBased; if (UseLocalCache) { configuration.LocalCacheProperties = new DataCacheLocalCacheProperties( LocalCacheObjectCount, LocalCacheDefaultTimeout, LocalCacheInvalidationPolicy ); // configuration.NotificationProperties = new DataCacheNotificationProperties(500, TimeSpan.FromSeconds(300)); } Initially we tried using a timeout invalidation policy (3 mins) and our app felt like it was running faster. HOWEVER, we noticed that if we changed something in the admin site, it was immediately updated in the live site. As we are using timeouts, not notifications, this demonstrates that the local cache isn't being queried (or is, but is always missing). cache.GetType().Name returns "LocalCache" - so the factory has made a local cache. Running "Get-Cache-Statistics MyCache" in PS on my dev environment (ASP.NET app running locally from VS2008, cache cluster running on a separate W2K8 machine) shows a handful of Request Counts. However, on the production environment, the Request Count increases dramatically. We tried following the method here to see the cache client-server traffic... http://blogs.msdn.com/b/appfabriccat/archive/2010/09/20/appfabric-cache-peeking-into-client-amp-server-wcf-communication.aspx but the log file had nothing but the initial header in it - i.e. no logging either. I can't find anything on SO or Google. Have we done something wrong? Have we got a screwy install of AppFabric (we installed it via Web Platform Installer, I think)? (Note: the IIS box running ASP.NET isn't in the cluster - it is just the client.) Any insights gratefully received!

    Read the article

  • Problems with variadic function

    - by morpheous
    I have the following function from some legacy code that I am maintaining. long getMaxStart(long start, long count, const myStruct *s1, ...) { long i1, maxstart; myStruct *s2; va_list marker; maxstart = start; /*BUGFIX: 003 */ /*(va_start(marker, count);*/ va_start(marker, s1); for (i1 = 1; i1 <= count; i1++) { s2 = va_arg(marker, myStruct *); /* <- s2 is assigned null here */ maxstart = MAX(maxstart, s2->firstvalid); /* <- SEGV here */ } va_end(marker); return (maxstart); } When the function is called with only one myStruct argument, it causes a SEGV. The code compiled and run without crashing on Windows XP when I compiled it using VS2005. I have now moved the code to Ubuntu Karmic and I am having problems with the stricter compiler on Linux. Is anyone able to spot what is causing the parameter not to be read correctly in the var_arg() statement? I am compiling using gcc version 4.4.1 Edit The statement that causes the SEGV is this one: start = getMaxStart(start, 1, ms1); The variables 'start' and 'ms1' have valid values when the code execution first reaches this line.
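
    One plausible reading, and it is only an assumption about the intended semantics: with va_start(marker, s1) the named parameter s1 is never consumed by va_arg, so a call such as getMaxStart(start, 1, ms1) makes the loop fetch one pointer beyond the arguments that were actually passed. That is undefined behaviour which happened to survive on 32-bit Windows and yields a null pointer on x86-64 Linux. A sketch of a version that treats count as including s1 itself (myStruct below is just a stand-in for the real struct):

        #include <cstdarg>
        #include <algorithm>

        struct myStruct { long firstvalid; };   // stand-in for the real struct

        // Assumes `count` includes s1: s1 is used directly and only count-1
        // further pointers are pulled from the variadic list, so a call like
        // getMaxStart(start, 1, ms1) never reads past the arguments passed.
        long getMaxStart(long start, long count, const myStruct *s1, ...)
        {
            long maxstart = std::max(start, s1->firstvalid);

            va_list marker;
            va_start(marker, s1);                  // s1 really is the last named parameter
            for (long i = 1; i < count; i++) {     // the remaining count-1 structs, if any
                const myStruct *s2 = va_arg(marker, const myStruct *);
                maxstart = std::max(maxstart, s2->firstvalid);
            }
            va_end(marker);
            return maxstart;
        }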

    Read the article

  • TCP Socket.Connect is generating false positives

    - by Mark
    I'm experiencing really weird behavior with the Socket.Connect method in C#. I am attempting a TCP Socket.Connect to a valid IP but closed port and the method is continuing as if I have successfully connected. When I packet sniffed what was going on I saw that the app was receiving RST packets from the remote machine. Yet from the tracing that is in place it is clear that the connect method is not throwing an exception. Any ideas what might be causing this? The code that is running is basically this IPEndPoint iep = new IPEndPoint(System.Net.IPAddress.Parse(m_ipAddress), m_port); Socket tcpSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); tcpSocket.Connect(iep); To add to the mystery... when running this code in a stand alone console application, the result is as expected – the connect method throws an exception. However, when running it in the Windows Service deployment we have the connect method does not throw an exception. Edit in response to Mystere Man's answer How would the exception be swallowed? I have a Trace.WriteLine right above the .Connect method and a Trace.WriteLine right under it (not shown in the code sample for readability). I know that both traces are running. I also have a try catch around the whole thing which also does a Trace.Writeline and I don't see that in the log files anywhere. I have also enabled the internal socket tracing as you suggested. I don't see any exceptions. I see what appears to be successful connections. I am trying to identify differences between the windows service app and the diagnostic console app I made. I am running out of ideas though End edit Thanks

    Read the article

  • tcl_findLibrary doesn't work.

    - by user437815
    I'm trying to run a freshly compiled program written with Tcl/Tk. When running it I get an error: felix@Astroserver:~/?????????/Simon$ sudo ./simon1 invalid command name "tcl_findLibrary" I'm running Ubuntu 11.04 and I have installed Tcl/Tk (because I was able to successfully build the program). If I run tcl_findLibrary in wish: % tcl_findLibrary wrong # args: should be "tcl_findLibrary basename version patch initScript enVarName varName" Could anyone help?

    Read the article

  • SSIS - Connection Management Within a Loop

    - by Rob Bowman
    Hi, I have an SSIS package in which, within a Foreach loop, a connection is opened and closed for each iteration. On running SQL Profiler I see a series of: Audit Login, RPC:Completed, Audit Logout. The duration for the login and the RPC that actually does the work is minimal. However, the duration for the logout is significant, running into several seconds each. This causes the job to run very slowly, taking many hours. I get the same problem whether running on a test server or a stand-alone laptop. Could anyone please suggest how I may change the package to improve performance? Also, I have noticed that when running the package from Visual Studio, it looks as though it continues to run, with the component blocks going amber then green, even though all the processing has been completed and SQL Profiler has gone silent. Why is that? Thanks, Rob.

    Read the article
