Search Results

Search found 15835 results on 634 pages for 'static routes'.

  • Why does explorer restart automatically when I kill it with Process.Kill?

    - by Thomas Levesque
    If I kill explorer.exe like this:

        private static void KillExplorer()
        {
            var processes = Process.GetProcessesByName("explorer");
            Console.Write("Killing Explorer... ");
            foreach (var process in processes)
            {
                process.Kill();
                process.WaitForExit();
            }
            Console.WriteLine("Done");
        }

    it restarts immediately. But if I use taskkill /F /IM explorer.exe, or kill it from the task manager, it doesn't restart. Why is that? What's the difference? How can I close explorer.exe from code without it restarting? Sure, I could call taskkill from my code, but I was hoping for a cleaner solution...
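    No answer is recorded on this page. One hedged sketch of an approach, based on the documented Winlogon registry value AutoRestartShell, which controls whether Windows relaunches the shell when it exits: clear the value, kill the process, then restore it. The value name is documented; everything else below is an untested assumption, and writing the value requires administrative rights.

        // Sketch (untested): temporarily disable Winlogon's shell auto-restart,
        // kill explorer.exe, then restore the original setting. Requires admin rights.
        using System.Diagnostics;
        using Microsoft.Win32;

        static class ExplorerKiller
        {
            const string WinlogonKey = @"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon";

            public static void KillExplorerWithoutRestart()
            {
                using (var key = Registry.LocalMachine.OpenSubKey(WinlogonKey, writable: true))
                {
                    object oldValue = key.GetValue("AutoRestartShell");
                    key.SetValue("AutoRestartShell", 0, RegistryValueKind.DWord);
                    try
                    {
                        foreach (var process in Process.GetProcessesByName("explorer"))
                        {
                            process.Kill();
                            process.WaitForExit();
                        }
                    }
                    finally
                    {
                        // Put the original value back so the shell restarts normally later.
                        key.SetValue("AutoRestartShell", oldValue ?? 1, RegistryValueKind.DWord);
                    }
                }
            }
        }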

  • Cancelling Route Navigation in AngularJS Controllers

    - by dwahlin
    If you’re new to AngularJS check out my AngularJS in 60-ish Minutes video tutorial or download the free eBook. Also check out The AngularJS Magazine for up-to-date information on using AngularJS to build Single Page Applications (SPAs).

    Routing provides a nice way to associate views with controllers in AngularJS using a minimal amount of code. While a user is normally able to navigate directly to a specific route, there may be times when a user triggers a route change before they’ve finalized an important action such as saving data. In these types of situations you may want to cancel the route navigation and ask the user if they’d like to finish what they were doing so that their data isn’t lost. In this post I’ll talk about a technique that can be used to accomplish this type of routing task.

    The $locationChangeStart Event

    When route navigation occurs in an AngularJS application a few events are raised. One is named $locationChangeStart and the other is named $routeChangeStart (there are other events as well). At the current time (version 1.2) $routeChangeStart doesn’t provide a way to cancel route navigation; however, the $locationChangeStart event can be used to cancel navigation. If you dig into the AngularJS core script you’ll find the following code that shows how the $locationChangeStart event is raised as the $browser object’s onUrlChange() function is invoked:

        $browser.onUrlChange(function (newUrl) {
            if ($location.absUrl() != newUrl) {
                if ($rootScope.$broadcast('$locationChangeStart', newUrl,
                        $location.absUrl()).defaultPrevented) {
                    $browser.url($location.absUrl());
                    return;
                }
                $rootScope.$evalAsync(function () {
                    var oldUrl = $location.absUrl();
                    $location.$$parse(newUrl);
                    afterLocationChange(oldUrl);
                });
                if (!$rootScope.$$phase) $rootScope.$digest();
            }
        });

    The key part of the code is the call to $broadcast. This call broadcasts the $locationChangeStart event to all child scopes so that they can be notified before a location change is made. To handle the $locationChangeStart event you can use the $rootScope.$on() function. For this example I’ve added a call to $on() into a function that is called immediately after the controller is invoked:

        function init() {
            //initialize data here..

            //Make sure they're warned if they made a change but didn't save it
            //Call to $on returns a "deregistration" function that can be called to
            //remove the listener (see routeChange() for an example of using it)
            onRouteChangeOff = $rootScope.$on('$locationChangeStart', routeChange);
        }

    This code listens for the $locationChangeStart event and calls routeChange() when it occurs. The value returned from calling $on is a “deregistration” function that can be called to detach from the event. In this case the deregistration function is named onRouteChangeOff (it’s accessible throughout the controller). You’ll see how the onRouteChangeOff function is used in just a moment.
    Cancelling Route Navigation

    The routeChange() callback triggered by the $locationChangeStart event displays a modal dialog to prompt the user. Here’s the code for routeChange():

        function routeChange(event, newUrl) {
            //Navigate to newUrl if the form isn't dirty
            if (!$scope.editForm.$dirty) return;

            var modalOptions = {
                closeButtonText: 'Cancel',
                actionButtonText: 'Ignore Changes',
                headerText: 'Unsaved Changes',
                bodyText: 'You have unsaved changes. Leave the page?'
            };

            modalService.showModal({}, modalOptions).then(function (result) {
                if (result === 'ok') {
                    onRouteChangeOff(); //Stop listening for location changes
                    $location.path(newUrl); //Go to page they're interested in
                }
            });

            //prevent navigation by default since we'll handle it
            //once the user selects a dialog option
            event.preventDefault();
            return;
        }

    Looking at the parameters of routeChange() you can see that it accepts an event object and the new route that the user is trying to navigate to. The event object is used to prevent navigation since we need to prompt the user before leaving the current view. Notice the call to event.preventDefault() at the end of the function. The modal dialog is shown by calling modalService.showModal() (see my previous post for more information about the custom modalService that acts as a wrapper around Angular UI Bootstrap’s $modal service). If the user selects “Ignore Changes” then their changes will be discarded and the application will navigate to the route they intended to go to originally. This is done by first detaching from the $locationChangeStart event by calling onRouteChangeOff() (recall that this is the function returned from the call to $on()) so that we don’t get stuck in a never-ending cycle where the dialog continues to display when they click the “Ignore Changes” button. A call is then made to $location.path(newUrl) to handle navigating to the target view. If the user cancels the operation they’ll stay on the current view.

    Conclusion

    The key to canceling routes is understanding how to work with the $locationChangeStart event and cancelling it so that route navigation doesn’t occur. I’m hoping that in the future the same type of task can be done using the $routeChangeStart event, but for now this code gets the job done. You can see this code in action in the Customer Manager application available on Github (specifically the customerEdit view). Learn more about the application here.

  • How to make ssh/rsync/etc use a VLAN network interface?

    - by Annan
    A company I work for has a number of virtual servers with ElasticHosts. They are set up in such a way that eth1 is on a private VLAN connecting them to each other. This is so backups sent between servers are not charged at the same rate as external data transfer. My understanding of how VLANs and network interfaces work is sketchy at best. How can I make ssh, rsync, etc. transfer data through the VLAN?

    My final solution: I spent a while trying to figure this out. On all servers involved, edit /etc/sysconfig/network-scripts/ifcfg-eth1:

        DEVICE=eth1
        BOOTPROTO=static
        ONBOOT=yes
        HWADDR=YOUR_MAC_ADDR
        IPADDR=192.168.0.100
        NETMASK=255.255.255.0

    where HWADDR should already be set and the last octet of IPADDR should differ between servers. Then run, on all servers:

        /etc/init.d/network restart

    After this the IP addresses specified by IPADDR can be used directly like any other IP address.
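    For illustration, once each server has its private address, pointing ssh or rsync at the VLAN is just a matter of using those addresses instead of the public ones (a sketch; the host address and paths below are made up):

        # Reach the other server over the private VLAN rather than the public interface
        ssh root@192.168.0.101

        # Same idea for backups; source and destination paths are hypothetical
        rsync -av /var/backups/ root@192.168.0.101:/var/backups/server1/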

  • Connect to wired and wireless networks at same time, Ubuntu

    - by Gary Chambers
    Currently, I have a media PC running Ubuntu 10.04 that I am trying to connect via a wired network cable directly to a NAS box, and wirelessly to the router. This works no problem after I run:

        sudo /etc/init.d/networking restart

    but I can't get both interfaces to come up on system startup. My /etc/network/interfaces file reads as follows:

        auto eth0
        iface eth0 inet static
            address 10.0.1.2
            netmask 255.255.254.0
            broadcast 10.0.1.255
            network 10.0.1.0

        auto wlan2
        iface wlan2 inet dhcp

    As I say, I know this works, because I can get it to work by restarting the network interfaces, but I can't bring them both up on system startup. Does anyone know why this might be?
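    No answer is recorded here; a hedged way to narrow it down is to replay the boot-time bring-up by hand and watch the log, which usually shows whether ifupdown ran before the wireless driver was ready:

        # Re-run the boot-time bring-up verbosely and watch for errors
        sudo ifdown -a && sudo ifup -v -a

        # Check where eth0/wlan2 got stuck during boot
        grep -E 'eth0|wlan2' /var/log/syslog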

  • Cannot execute Java program: UnsupportedClassVersionError

    - by Ricko Devian
    I have installed JDK 6, but I can't execute a Java program. For example, I have made tes.java. I compile it with javac tes.java and there's no error when I compile it, but when I want to execute the program it always displays an error. I execute it with java tes:

        Exception in thread "main" java.lang.UnsupportedClassVersionError: tes : Unsupported major.minor version 51.0
            at java.lang.ClassLoader.defineClass1(Native Method)
            at java.lang.ClassLoader.defineClass(ClassLoader.java:634)
            at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
            at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
            at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
            at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
        Could not find the main class: tes. Program will exit.

    My javac version is 1.7.0, my java version is 1.6.0. Here is my tes.java code:

        class tes {
            public static void main(String[] args) {
                System.out.println("hello");
            }
        }
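    The version numbers in the question explain the error: class file version 51.0 is what a 1.7 javac emits, while a 1.6 java runtime only accepts up to 50.0. A hedged fix is to compile for the older runtime (or run with the JDK 7 java instead):

        # Ask the JDK 7 compiler to emit Java 6-compatible bytecode
        javac -source 1.6 -target 1.6 tes.java

        # Then the existing 1.6.0 runtime can load it
        java tes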

  • Access functions from user control without events?

    - by BornToCode
    I have an application made with user controls and a function on the main form that removes the previous user controls and shows the desired user control centered and tweaked:

        public void DisplayControl(UserControl uControl)

    I find it much easier to make this function static, or to access it by reference from the user control, like this:

        MainForm mainform_functions = (MainForm)Parent;
        mainform_functions.DisplayControl(uc_a);

    You probably think it's a sin to access a function in the main form from the user control; however, raising an event seems much more complex in such a case. I'll give a simple example - let's say I raise an event from usercontrol_A to show usercontrol_B on the main form, so I write this:

        uc_a.show_uc_b += (s, e) =>
        {
            usercontrol_B uc_b = new usercontrol_B();
            DisplayControl(uc_b);
        };

    Now what if I want usercontrol_B to also have an event to show usercontrol_C? Now it would look like this:

        uc_a.show_uc_b += (s, e) =>
        {
            usercontrol_B uc_b = new usercontrol_B();
            DisplayControl(uc_b);
            uc_b.show_uc_c += (s2, e2) =>
            {
                usercontrol_C uc_c = new usercontrol_C();
                DisplayControl(uc_c);
            };
        };

    THIS LOOKS AWFUL! The code is much simpler and more readable when you actually access the function from the user control itself, therefore I came to the conclusion that in such a case it's not so terrible if I break the rules and don't use events for such a general function. I also think that a readable user control that needs small adjustments for another app is preferable to a 100% 'generic' one which makes my code look like a pile of mud. What is your opinion? Am I mistaken?
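    A hedged middle-ground sketch (not from the question): let the main form implement a small interface, so the user control calls an abstraction instead of casting to MainForm, without needing chained events. The interface name and lookup are assumptions:

        // Sketch: the host form exposes navigation via an interface, so user
        // controls avoid a hard reference to MainForm.
        using System.Windows.Forms;

        public interface IControlHost
        {
            void DisplayControl(UserControl uControl);
        }

        public partial class MainForm : Form, IControlHost
        {
            public void DisplayControl(UserControl uControl)
            {
                // existing centering/tweaking logic goes here
            }
        }

        // Inside any user control:
        // var host = FindForm() as IControlHost;  // FindForm() walks up to the containing form
        // if (host != null)
        //     host.DisplayControl(new usercontrol_B());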

  • Better solution than a simple factory method when concrete implementations have different attributes

    - by danip
    abstract class Animal {
        function eat() {..}
        function sleep() {..}
        function isSmart()
    }

    class Dog extends Animal {
        public $blnCanBark;
        function isSmart() { return $this->blnCanBark; }
    }

    class Cat extends Animal {
        public $blnCanJumpHigh;
        function isSmart() { return $this->blnCanJumpHigh; }
    }

    ... and so on, up to 10-20 animals. Now I created a factory using the simple factory method and try to create instances like this:

        class AnimalFactory {
            public static function create($strName) {
                switch ($strName) {
                    case 'Dog':
                        return new Dog();
                    case 'Cat':
                        return new Cat();
                    default:
                        break;
                }
            }
        }

    The problem is I can't set the specific attributes like blnCanBark and blnCanJumpHigh in an efficient way. I can send all of them as extra params to create() but this will not scale to more than a few classes. Also I can't break the inheritance because a lot of the basic functionality is the same. Is there a better pattern to solve this?
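    One hedged possibility (a sketch, not from the question): keep a single generic create() and pass the per-type attributes as an associative array, letting PHP set whatever public properties the concrete class declares:

        // Sketch: generic factory that fills concrete-class attributes from an array.
        class AnimalFactory {
            public static function create($strName, array $attributes = array()) {
                if (!is_subclass_of($strName, 'Animal')) {
                    throw new InvalidArgumentException("Unknown animal: $strName");
                }
                $animal = new $strName();
                foreach ($attributes as $name => $value) {
                    if (property_exists($animal, $name)) {
                        $animal->$name = $value; // e.g. blnCanBark on Dog
                    }
                }
                return $animal;
            }
        }

        // Usage:
        // $dog = AnimalFactory::create('Dog', array('blnCanBark' => true));
        // $cat = AnimalFactory::create('Cat', array('blnCanJumpHigh' => false));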

  • How to manage a large number of users on the server side?

    - by Rami
    I built a social Android application in which users can see other users around them by GPS location. At the beginning things went well as I had a low number of users, but now that I have an increasing number of users (about 1500, +100 every day) I have revealed a major problem in my design. In my Google App Engine servlet I have a static HashMap holding all the user profile objects, currently 1500, and this number will increase as more users register.

    Why I'm doing it: every user requesting the users around him compares his GPS with other users and checks if they are in his 10 km radius; this happens every 5 min on average. That is why I can't get the users from the db every time - the GAE read/write operation quota would tear me apart.

    The problem with this design: as the number of users increased, the HashMap turns to null every 4-6 hours. I think this interval is getting shorter, but I'm not sure. I'm fixing this by reloading the users from the db every time I detect that it became null, but this causes a DOS to my users for 30 sec, so I'm looking for a better solution. I'm guessing that it happens because of the size of the HashMap - am I right?

    I have been advised to use a spatial database, but that means I can't work with GAE any more, and that means I need to build my big server all over again and lose my existing DB. Is there something I can do with the existing tools? Thanks.
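    No accepted answer is shown here, but one hedged observation: on App Engine a static field only lives as long as a single instance, and instances are started and stopped by the platform, which would explain the map "turning to null". A minimal sketch using the shared Memcache service instead (the cache key, the UserProfile type and the datastore reload are placeholders; cached values must be serializable):

        // Sketch: keep the profile map in App Engine's shared Memcache rather
        // than in a static field that dies with the instance.
        import com.google.appengine.api.memcache.MemcacheService;
        import com.google.appengine.api.memcache.MemcacheServiceFactory;
        import java.util.HashMap;

        public class ProfileCache {
            private static final String KEY = "allUserProfiles"; // hypothetical cache key

            @SuppressWarnings("unchecked")
            public static HashMap<String, UserProfile> load() {
                MemcacheService cache = MemcacheServiceFactory.getMemcacheService();
                HashMap<String, UserProfile> profiles =
                        (HashMap<String, UserProfile>) cache.get(KEY);
                if (profiles == null) {
                    profiles = loadFromDatastore(); // hypothetical: rebuild once, not per request
                    cache.put(KEY, profiles);
                }
                return profiles;
            }

            private static HashMap<String, UserProfile> loadFromDatastore() {
                return new HashMap<String, UserProfile>(); // placeholder
            }
        }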

  • Status code in nginx try_files directive

    - by Hamish
    Is it possible to use the current status code as a parameter in try_files? For example, we try to provide a host-specific 503 static response, or a server-wide fallback if it wasn't found:

        error_page 503 @error503;

        location @error503 {
            root /path_to_static_root/;
            try_files /$host/503.html /503.html =503;
        }

    There are a number of these directives, so it would be convenient to do something like:

        error_page 404 @error;
        error_page 500 @error;
        error_page 503 @error;

        location @error {
            root /path_to_static_root/;
            try_files /$host/$status.html /$status.html =$status;
        }

    But the Variables documentation doesn't list anything that we could use to do this. Is it possible, or is there an alternative way to do this?
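    No answer is recorded here. One heavily hedged sketch: newer nginx builds expose a $status variable (added for logging around 1.3.2), so on a new enough build something close to the desired form may work for the file lookups, but the trailing =code argument of try_files has to stay a literal:

        # Sketch, assuming an nginx new enough to provide $status;
        # the final fallback code must still be written out literally.
        error_page 404 500 503 @error;

        location @error {
            root /path_to_static_root/;
            try_files /$host/$status.html /$status.html =500;
        }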

  • Rolling Back Microsoft CRM during testing

    - by npeterson
    Process-related question: currently we have a multi-tenant installation of MS CRM 4.0 on three servers: Dev, Test, and Live. We are actively working on customizing one of the tenants, but the others are static. During user testing, we often find it necessary to 'start fresh' in one of the tenants. Is it better to try to delete the changes from the tenant (created accounts, leads, etc.), or to just revert the database to a backup from before the testing started? Are there compelling reasons why bulk delete is not advisable for MS CRM, or why reverting the database frequently could cause issues?

  • Dynamic procmail filters

    - by WombaT
    I need procmail to place incoming mail into a specific folder depending on some set of rules. I know how I can accomplish this, but I would need to write a static set of rules in a specific file. What I really need is to configure procmail to use rules stored in a MySQL database. How can I do this? I've read a bit about it, and one solution I found is to pipe the message to a php/perl script and return a folder name to place the message in. But I have completely no idea how to use a php script as a rule and then use its return value.
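    A hedged sketch of the piping approach mentioned above: procmail feeds the message on stdin to any backticked command, so a recipe can let a PHP script (which could read its rules from MySQL and print a mailbox name) choose the folder. The script path and output convention are made up:

        # Sketch: the script reads the full message on stdin and prints a folder name.
        FOLDER=`/usr/bin/php /etc/procmail/route.php`

        :0:
        $FOLDER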

  • Solved: Operation is not valid due to the current state of the object

    - by ChrisD
    We use public static methods decorated with [WebMethod] to support our Ajax postbacks. Recently, I received an error from a UI developer stating he was receiving the following error when attempting his postback:

        {
          "Message": "Operation is not valid due to the current state of the object.",
          "StackTrace": "
             at System.Web.Script.Serialization.ObjectConverter.ConvertDictionaryToObject(IDictionary`2 dictionary, Type type, JavaScriptSerializer serializer, Boolean throwOnError, Object& convertedObject)
             at System.Web.Script.Serialization.ObjectConverter.ConvertObjectToTypeInternal(Object o, Type type, JavaScriptSerializer serializer, Boolean throwOnError, Object& convertedObject)
             at System.Web.Script.Serialization.ObjectConverter.ConvertObjectToTypeMain(Object o, Type type, JavaScriptSerializer serializer, Boolean throwOnError, Object& convertedObject)
             at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeInternal(Int32 depth)
             at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeDictionary(Int32 depth)
             at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeInternal(Int32 depth)
             at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeDictionary(Int32 depth)
             at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeInternal(Int32 depth)
             at System.Web.Script.Serialization.JavaScriptObjectDeserializer.BasicDeserialize(String input, Int32 depthLimit, JavaScriptSerializer serializer)
             at System.Web.Script.Serialization.JavaScriptSerializer.Deserialize(JavaScriptSerializer serializer, String input, Type type, Int32 depthLimit)
             at System.Web.Script.Serialization.JavaScriptSerializer.Deserialize[T](String input)
             at System.Web.Script.Services.RestHandler.GetRawParamsFromPostRequest(HttpContext context, JavaScriptSerializer serializer)
             at System.Web.Script.Services.RestHandler.GetRawParams(WebServiceMethodData methodData, HttpContext context)
             at System.Web.Script.Services.RestHandler.ExecuteWebServiceCall(HttpContext context, WebServiceMethodData methodData)",
          "ExceptionType": "System.InvalidOperationException"
        }

    Googling this error brought me little support. All the results talked about increasing the aspnet:MaxJsonDeserializerMembers value to handle larger payloads. Since 1) I'm not using the ASP.NET Ajax model and 2) the payload is very small, this clearly was not the cause of my issue. Here's the payload the UI developer was sending to the endpoint:

        {
          "FundingSource": {
            "__type": "XX.YY.Engine.Contract.Funding.EvidenceBasedFundingSource, XX.YY.Engine.Contract",
            "MeansType": 13,
            "FundingMethodName": "LegalTender",
          },
          "AddToProfile": false,
          "ProfileNickName": "",
          "FundingAmount": 0
        }

    By tweaking the JSON I've found the culprit. Apparently the default JSS serializer used doesn't like the assembly name in the __type value. Removing the assembly portion of the type name resolved my issue:

        {
          "FundingSource": {
            "__type": "XX.YY.Engine.Contract.Funding.EvidenceBasedFundingSource",
            "MeansType": 13,
            "FundingMethodName": "LegalTender",
          },
          "AddToProfile": false,
          "ProfileNickName": "",
          "FundingAmount": 0
        }

  • Cat 6 Only 100mbit speed

    - by Stu2000
    I tried two different Cat 6 cables directly connected between my two Ubuntu machines. This one I ordered online: http://www.amazon.co.uk/gp/product/B002SQPDXS/ref=wms_ohs_product only achieves 100 Mbit speeds, though it does appear to support a direct PC-to-PC (crossover) connection; the other Cat 6 cable worked perfectly and gets the full 1 gigabit speed. Both tests were performed using FTP and checking the network monitor with a direct PC-to-PC connection. Did the product from Amazon lie to me, or do I need to manually set a setting somewhere in Ubuntu for some cables? I had thought 10 quid for 20m of gigabit Ethernet cable was a bit cheap; you get what you pay for... Regards, Stu

    Update: It seems that after rebooting, the device is set to 1000 Mbit/sec when looking it up with:

        sudo ethtool eth0

    However after a while this will drop down to just 100, after which to reset it to 1000 again I have to reboot; simply unplugging and re-plugging the cable doesn't do it. I tried setting this in the networking config file as suggested here:

        auto eth0
        iface eth0 inet static
            pre-up /usr/sbin/ethtool -s eth0 speed 1000 duplex full

    but that resulted in my networking failing to start. Is there a problem with my 'auto-negotiation' or something? Can I manually override a setting to 1000 Mbit?
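    No accepted answer appears here, but one hedged note: gigabit links negotiate their speed as part of auto-negotiation, so forcing speed 1000 with ethtool (which turns negotiation off) tends to bring the link down, which would match the failure above. A sketch that keeps auto-negotiation but advertises only 1000baseT/Full (the address lines stand in for whatever static settings the post elided):

        # Sketch: advertise only 1000baseT/Full (ethtool bitmask 0x020)
        # instead of forcing the speed outright.
        auto eth0
        iface eth0 inet static
            address 192.168.1.10        # placeholder static settings
            netmask 255.255.255.0
            post-up /usr/sbin/ethtool -s eth0 autoneg on advertise 0x020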

  • How can I resolve component types in a way that supports adding new types relatively easily?

    - by John
    I am trying to build an Entity Component System for an interactive application developed using C++ and OpenGL. My question is quite simple. In my GameObject class I have a collection of Components. I can add and retrieve components.

        class GameObject : public Object
        {
        public:
            GameObject(std::string objectName);
            ~GameObject(void);

            Component * AddComponent(std::string name);
            Component * AddComponent(Component componentType);
            Component * GetComponent(std::string TypeName);
            Component * GetComponent(<Component Type Here>);

        private:
            std::map<std::string, Component*> m_components;
        };

    I will have a collection of components that inherit from the base Component class. So if I have a MeshRenderer component, I would like to do the following:

        GameObject * warship = new GameObject("myLovelyWarship");
        MeshRenderer * meshRenderer = warship->AddComponent(MeshRenderer);

    or possibly:

        MeshRenderer * meshRenderer = warship->AddComponent("MeshRenderer");

    I could make a Component Factory like this:

        class ComponentFactory {
        public:
            static Component * CreateComponent(const std::string &compTyp)
            {
                if (compTyp == "MeshRenderer")
                    return new MeshRenderer;
                if (compTyp == "Collider")
                    return new Collider;
                return NULL;
            }
        };

    However, I feel like I should not have to keep updating the Component Factory every time I want to create a new custom Component, but it is an option. Is there a more proper way to add and retrieve these components? Are standard templates another solution?
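    One hedged sketch of the template route the question asks about, assuming C++11 and RTTI enabled: key the map by std::type_index so no central factory needs updating when a new Component type appears (Object, Component and MeshRenderer are the post's own types):

        // Sketch: template members keyed by std::type_index replace the string factory.
        #include <map>
        #include <typeindex>
        #include <typeinfo>

        class GameObject : public Object
        {
        public:
            template <typename T>
            T * AddComponent()
            {
                T * component = new T();
                m_components[std::type_index(typeid(T))] = component;
                return component;
            }

            template <typename T>
            T * GetComponent()
            {
                auto it = m_components.find(std::type_index(typeid(T)));
                return it == m_components.end() ? nullptr : static_cast<T*>(it->second);
            }

        private:
            std::map<std::type_index, Component*> m_components;
        };

        // Usage:
        // MeshRenderer * meshRenderer = warship->AddComponent<MeshRenderer>();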

  • Scaling up an apache server

    - by pehrs
    I have an Ubuntu server running apache2 which I expect to be hit by around 500-1000 concurrent users for a limited amount of time. The server serves a mixture of custom (rather light) php pages connected to a postgresql db (around 20 MB in size) and static content. The hardware is stable and pretty beefy:

        Intel Xeon E5420 @ 2.5 GHz
        12 GB RAM

    During previous rushes on this server I have increased ServerLimit and the MaxClients for the MPM modules, and decreased Timeout and KeepAliveTimeout. It has worked, but been sluggish, and I have a feeling more can be done. How would you suggest configuring the Apache server to handle this kind of load?
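    No tuned configuration is given in the question; purely as a hedged illustration of the directives it mentions, a worker-MPM block might look like the following. Every number is a placeholder to be sized against available RAM per process, not a recommendation:

        # Sketch for the apache2 worker MPM; all numbers are placeholders.
        <IfModule mpm_worker_module>
            ServerLimit           32
            ThreadsPerChild       32
            MaxClients          1024    # ServerLimit * ThreadsPerChild
            MaxRequestsPerChild 10000
        </IfModule>

        KeepAlive         On
        KeepAliveTimeout  2             # short keep-alive frees workers quickly
        Timeout           30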

  • Functional testing in verification

    - by user970696
    Yesterday my question "How come verification does not include actual testing?" created a lot of controversy, yet did not reveal the answer to a related and very important question: does black-box functional testing done by testers belong to verification or validation?

    ISO 12207:2008 mentions testing explicitly only as a validation activity; however, it speaks about validation of the requirements of the intended use. For me that is more high-level, like UAT test cases written by business users. The ISO standard mentioned above does not mention any specific verification (7.2.4.3.2) except for requirement verification, design verification, and document and code & integration verification. The last two can probably be thought of as unit and integration testing. But where, then, is the regular testing done by testers at the end of the phase? The book I mentioned in the original question says that verification is done by static techniques, yet on the V-model graph it describes system testing against a high-level description as verification, mentioning that it includes all kinds of testing like functional, load etc.

    In the IEEE standard for V&V, you can read this: "Even though the tests and evaluations are not part of the V&V processes, the techniques described in this standard may be useful in performing them." So that is different from the ISO, where validation mentions testing as the activity. Not to mention a lot of contradicting information on the net.

    I would really appreciate a reference to e.g. a standard in the answer, or an explanation of what I missed in the ISO. For me, I am unable to tell where the testers' work belongs.

  • Best solution for getting referral information in PHP

    - by absentx
    I am currently redoing some link structuring on a website. In the past we have used specific php files on the last step to direct the user to the proper place. Example:

        www.mysite.com/action/go-to-blue.php

    or:

        www.mysite.com/action/short/go-to-red.php
        www.mysite.com/action/tall/go-to-red.php

    We are now restructuring to eliminate the /short/ or /tall/ directory. What this means is that now "go-to-blue.php" will be doing some extra processing to make sure it sends the visitor to the proper place. The static method of the past was quite effective, because, well, if they left from that page we knew we had it right. Now since we are 301 redirecting action/short/go-to-red.php to just action/go-to-red.php, it is quite important on "go-to-red.php" that we realize a user may have been redirected from /short/ or /tall/.

    Right now I am using HTTP_REFERRER, and of course in my testing that works fine, but after a lot of reading it is clear that this is not a solid solution, so I started brainstorming other ways to check and make sure we get the proper referral information. If we could check HTTP_REFERRER plus some other test, I would feel confident we have a pretty good system in place to send the visitor to the right place.

    Some questions/comments: Could I use a session variable or a cookie to accomplish this goal? If so, would that be maintained through the 301 redirect? I don't see why it wouldn't be. Passing the url in the url is not an option in this case.
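    For the session question at the end, a hedged sketch, assuming the 301 is issued by PHP rather than by the web server (so there is a place to set the flag); the flag name is made up:

        <?php
        // Sketch - in the old /action/short/go-to-red.php, before redirecting:
        session_start();
        $_SESSION['came_from'] = 'short';              // hypothetical flag name
        header('Location: /action/go-to-red.php', true, 301);
        exit;

        // ...and in the new /action/go-to-red.php:
        session_start();
        $origin = isset($_SESSION['came_from']) ? $_SESSION['came_from'] : null;
        // The browser re-sends the session cookie on the redirected request,
        // so the flag survives the 301.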

  • Hyper-V Server hvremote.wsf Script - ns lookup for DNS Verification test fails

    - by Vazgen
    I'm trying to connect my Hyper-V Server to a Windows 8 client for remote management. I have:

    - Joined the server to WORKGROUP
    - Enabled Remote Management
    - Set the server name
    - Set a static IP
    - Set the DNS servers to my ISP's DNS servers (same as the default DNS servers on my Windows 8 remote management client)
    - Set the correct time zone
    - Created a net user on the server (net user /add admin password)
    - Added the user to the special Administrators group on the server (hvremote /add:admin)
    - Granted anonymous DCOM access on the client using hvremote

    However, the "ns lookup for DNS verification" fails on both the client and server with the same error:

        Server:  my.isps.server.name.net
        Address:  111.222.333.1

        *** my.isps.server.name.net can't find 192.168.1.3: Non-existent domain

    Thanks for the help.
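    No resolution is recorded here. One hedged workaround commonly suggested for workgroup Hyper-V setups is to bypass the ISP DNS entirely with hosts-file entries on both machines; the names and the client address below are placeholders:

        # C:\Windows\System32\drivers\etc\hosts on the Windows 8 client
        192.168.1.3     hypervsrv

        # ...and on the Hyper-V server, pointing back at the client
        192.168.1.50    win8client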

  • Exchange Mail Flow

    - by Tuck918
    Hello. I have a question. We have one Exchange 2003 server and two Exchange 2007 servers. Most of our mailboxes are on 2007, but we still have one shared mailbox, a Unity mailbox and a journaling mailbox on 2003. Public Folders have been set to replicate to 2007. I have set up a send connector on 2007 with a cost of 1. Receive connectors have Anonymous Users checked on 2007. On 2003 there are two connectors: the Internet Email connector and the connector that connects 2003 to 2007. We have a SPAM filtering device that email goes through before it is handed off to Exchange. The SPAM filtering device is set to send email to one of our Exchange 2007 servers.

    Here is my question/problem: even though the SPAM filtering device is set to forward email to Exchange 2007, somehow all of our email is still going through the Exchange 2003 server before it finally hits the users' mailboxes on the Exchange 2007 server. How can I change it so that all email goes directly to Exchange 2007 and never routes through Exchange 2003, both inbound and outbound?

    I would also like to add: in the EMC under Org - Hub - Send Connector there are two connectors. One is the "Internet Connector" from the 2003 box and the other is the new one I created. The address space on the 2003 one is set to a cost of 2, no smart hosts, and the 2003 box is listed as the source server. The other send connector has an address space cost of 1, no smart host, and has the two Exchange 2007 servers listed as the source servers. In EMC under Server - Hub - my two Exchange 2007 servers are listed. Each one has two receive connectors, both set up the same way. The Default receive connector has Anonymous Users checked. The other receive connector is labeled "Client" and I am not sure what it does or why it's there; Anonymous Users are not checked. No smart hosts are configured on 2003.

    Additional details: currently we have three Exchange servers - one Exchange 2003 server and two Exchange 2007 servers. The Exchange 2003 server is the acting "bridgehead" server, and all email is routing through this server, inbound and outbound. We want to decommission this server and use our two Exchange 2007 servers as our mailbox servers. All of our user mailboxes are already on one of the Exchange 2007 boxes, and we want to put what's left on the Exchange 2003 box on our other Exchange 2007 box. Both Exchange 2007 servers are currently CAS, HT and MB servers. We have a SPAM filtering device that sits between our Exchange servers and the firewall, and we have it configured to send messages to one of the Exchange 2007 servers, but when we look at the message headers we can see that messages are still being routed to the Exchange 2003 box. We want to bypass Exchange 2003 in the routing process, as it is dying and starting to have major issues, so every time it goes down our email is down. Is there possibly some sort of AD routing link/site link stuff going on?

  • How to do dependency Injection and conditional object creation based on type?

    - by Pradeep
    I have a service endpoint initialized using DI. It is of the following style, and this endpoint is used across the app.

        public class CustomerService : ICustomerService
        {
            private IValidationService ValidationService { get; set; }
            private ICustomerRepository Repository { get; set; }

            public CustomerService(IValidationService validationService, ICustomerRepository repository)
            {
                ValidationService = validationService;
                Repository = repository;
            }

            public void Save(CustomerDTO customer)
            {
                if (ValidationService.Valid(customer))
                    Repository.Save(customer);
            }
        }

    Now, with the changing requirements, there are going to be different types of customers (Legacy/Regular). The requirement is that, based on the type of the customer, I have to validate and persist the customer in a different way (e.g. if it's a Legacy customer, persist to a LegacyRepository). The wrong way to do this would be to break DI and do something like:

        public void Save(CustomerDTO customer)
        {
            if (customer.Type == CustomerTypes.Legacy)
            {
                if (LegacyValidationService.Valid(customer))
                    LegacyRepository.Save(customer);
            }
            else
            {
                if (ValidationService.Valid(customer))
                    Repository.Save(customer);
            }
        }

    My options seem to be: inject all possible IValidationService and ICustomerRepository implementations and switch based on type, which seems wrong. The other is to change the service signature to Save(IValidationService validation, ICustomerRepository repository, CustomerDTO customer), which is an invasive change and breaks DI. Or use the Strategy pattern approach for each type and do something like:

        validation = CustomerValidationServiceFactory.GetStrategy(customer.Type);
        validation.Valid(customer);

    but now I have a static method which needs to know how to initialize different services. I am sure this is a very common problem. What is the right way to solve this without changing service signatures or breaking DI?
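    One hedged sketch of a common answer (not from the post): register one strategy per customer type with the container and inject the whole set, so the type switch disappears without a static factory or a signature change. Everything except the post's own types (ICustomerService, CustomerDTO, CustomerTypes) is invented for illustration:

        // Sketch: per-type save strategies injected as a set; each strategy wraps
        // its own IValidationService/ICustomerRepository pair via normal DI.
        using System.Collections.Generic;
        using System.Linq;

        public interface ICustomerSaveStrategy
        {
            CustomerTypes Handles { get; }
            void Save(CustomerDTO customer);
        }

        public class CustomerService : ICustomerService
        {
            private readonly Dictionary<CustomerTypes, ICustomerSaveStrategy> _strategies;

            public CustomerService(IEnumerable<ICustomerSaveStrategy> strategies)
            {
                _strategies = strategies.ToDictionary(s => s.Handles);
            }

            public void Save(CustomerDTO customer)
            {
                _strategies[customer.Type].Save(customer);
            }
        }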

  • Handy Tool for Code Cleanup: Automated Class Element Reordering

    - by Geertjan
    You're working on an application and this thought occurs to you: "Wouldn't it be cool if I could define rules specifying that all static members, initializers, and fields should always be at the top of the class? And then, whenever I wanted to, I'd start off a process that would actually do the reordering for me, moving class elements around, based on the rules I had defined, automatically, across one or more classes or packages or even complete code bases, all at the same time?" Well, here you go: that's where you can set rules for the ordering of your class members.

    A new hint (i.e., new in NetBeans IDE 7.3), which you need to enable yourself because by default it is disabled, lets the IDE show a hint in the Java Editor whenever there's code that isn't ordered according to the rules you defined. The first element in a file that the Java Editor identifies as not matching your rules gets a lightbulb hint shown in the left sidebar. Then, when you click the lightbulb, the file is automatically reordered according to your defined rules.

    However, it's not much fun going through each file individually to fix class elements. For that reason, you can go to "Refactor | Inspect and Transform". There, in the "Inspect and Transform" dialog, you can choose the hint described above and then specify that you'd like it to be applied to a scope of your choice, which could be a file, a package, a project, combinations of these, or all of the open projects. Then, when Inspect is clicked, the Refactoring window shows all the members that are ordered in ways that don't conform to your rules. Click "Do Refactoring" and, in one fell swoop, all the class elements within the selected scope are ordered according to your rules.

  • OpenVPN - client-to-client traffic working in one direction but not the other

    - by user42055
    I have the following VPN configuration:

        +------------+          +------------+          +------------+
        |  outpost   |----------|    kino    |----------|  guchuko   |
        +------------+          +------------+          +------------+
        OS: FreeBSD 6.2         OS: Gentoo 2.6.32       OS: Gentoo 2.6.33.3
        Keyname: client3        Keyname: server         Keyname: client1
        eth0: 10.0.1.254        eth0: 203.x.x.x         eth0: 192.168.0.6
        tun0: 192.168.150.18    tun0: 192.168.150.1     tun0: 192.168.150.10
        P-t-P: 192.168.150.17   P-t-P: 192.168.150.2    P-t-P: 192.168.150.9

    Kino is the server and has client-to-client enabled. All three machines have IP forwarding enabled - by this on the Gentoo boxes:

        net.ipv4.conf.all.forwarding = 1

    and this on the FreeBSD box:

        net.inet.ip.forwarding: 1

    In the server's "ccd" directory are the following files:

        client1: iroute 192.168.0.0 255.255.255.0
        client3: iroute 10.0.1.0 255.255.255.0

    The server config has these routes configured:

        push "route 192.168.0.0 255.255.255.0"
        push "route 10.0.1.0 255.255.255.0"
        route 192.168.0.0 255.255.255.0
        route 10.0.1.0 255.255.255.0

    Kino's routing table looks like this:

        192.168.150.0   192.168.150.2   255.255.255.0    UG   0 0 0 tun0
        10.0.1.0        192.168.150.2   255.255.255.0    UG   0 0 0 tun0
        192.168.0.0     192.168.150.2   255.255.255.0    UG   0 0 0 tun0
        192.168.150.2   0.0.0.0         255.255.255.255  UH   0 0 0 tun0

    Outpost's like this:

        192.168.150     192.168.150.17  UGS  0  17  tun0
        192.168.0       192.168.150.17  UGS  0   2  tun0
        192.168.150.17  192.168.150.18  UH   3   0  tun0

    And Guchuko's like this:

        192.168.150.0   192.168.150.9   255.255.255.0    UG   0 0 0 tun0
        10.0.1.0        192.168.150.9   255.255.255.0    UG   0 0 0 tun0
        192.168.150.9   0.0.0.0         255.255.255.255  UH   0 0 0 tun0

    Now, the tests. Pings from Guchuko to Outpost's LAN IP work OK, as does the reverse - pings from Outpost to Guchuko's LAN IP. However... pings from Outpost to a machine on Guchuko's LAN work fine:

        .(( root@outpost )). (( 06:39 PM )) :: ~ ::
        # ping 192.168.0.3
        PING 192.168.0.3 (192.168.0.3): 56 data bytes
        64 bytes from 192.168.0.3: icmp_seq=0 ttl=63 time=462.641 ms
        64 bytes from 192.168.0.3: icmp_seq=1 ttl=63 time=557.909 ms

    But a ping from Guchuko to a machine on Outpost's LAN does not:

        .(( root@guchuko )). (( 06:43 PM )) :: ~ ::
        # ping 10.0.1.253
        PING 10.0.1.253 (10.0.1.253) 56(84) bytes of data.
        --- 10.0.1.253 ping statistics ---
        3 packets transmitted, 0 received, 100% packet loss, time 2000ms

    Guchuko's tcpdump of tun0 shows:

        18:46:27.716931 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 1, length 64
        18:46:28.716715 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 2, length 64
        18:46:29.716714 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 3, length 64

    Outpost's tcpdump on tun0 shows:

        18:44:00.333341 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 3, length 64
        18:44:01.334073 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 4, length 64
        18:44:02.331849 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 5, length 64

    So Outpost is receiving the ICMP request destined for the machine on its subnet, but appears not to be forwarding it. Outpost has gateway_enable="YES" in its rc.conf, which correctly sets net.inet.ip.forwarding to 1 as mentioned earlier. As far as I know, that's all that's required to make a FreeBSD box forward packets between interfaces. Is there something else I could be forgetting?

  • What is the safest and least expensive way to store 10 terabytes of data?

    - by Josh T
    I'm a member of a production company and we're preparing for our first feature film. We've been discussing methods of data storage to keep all of our original content safe (for as long as possible). While we understand data is never 100% safe, we'd like to find the safest solution for us. We've considered:

    - a 16TB NAS for on-site storage
    - 4-5 2TB hard drives (cheap, but not redundant): copy the original footage to the drives, then seal them in static-free bags
    - burning the data to Blu-ray discs (time consuming and expensive: 200 discs == $5000)
    - tape drive(s)? I know the least about tape drives, except the fact that they're more reliable than disks.

    Any experience/knowledge with this amount of data is hugely appreciated.

  • Wireless DHCP doesn't work until wired Ethernet plugged in

    - by MT_Head
    A client of mine has an Asus R1F tablet running Windows XP Tablet SP3. It has an Intel 3945ABG wireless card; wired Ethernet is a Realtek something-or-other. In the past few days, it's developed an odd problem:

    - WiFi authenticates, but can't get an address via DHCP.
    - Plug in wired Ethernet: both interfaces get good addresses.
    - Unplug the cable, and WiFi continues to work until shutdown.
    - Next morning, repeat the process.

    I've tried:

    - turning WiFi off/on (there's a slider switch)
    - disabling/re-enabling via Device Mangler
    - uninstalling and reinstalling the driver for the 3945ABG...
    - changing from Intel Pro/SET to Windows Wireless Zero Config (and back)
    - restarting the router
    - changing the static DHCP assignments at the router
    - upgrading the router firmware, just on general principles

    The router/access point is pfSense 1.2.3RC1 (was 1.2.2); the wireless card is Atheros-based. None of the 12 other users (5 with tablets) are having problems.

  • Why do I get "General Failure" when pinging host name on a Win 7 node on the network?

    - by hydroparadise
    This is a very peculiar problem with a station on our network. The client PC is running Windows 7 Pro. What makes this problem interesting is that this client is the only node on the network that seems to be experiencing it. When I try to ping a specific Win 08 server by host name, I get an IPv6 address back and "General failure". But when I ping its IPv4 address, it responds just fine. My first thought was to check the DNS server the name resolution uses, but the behavior raises the question: why does the station get an IPv6 address back and fail, as opposed to using the IPv4 settings (which are static, btw)? What gives? I am including a screen shot of trying the one specific server and failing, while trying another server with success. All other nodes on the network have no problems communicating with the server this one station is having issues with.
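    No answer appears on this page; one hedged first step is to force the lookup and ping over IPv4 (Windows ping supports -4 and -6 switches), which separates a dual-stack address-selection problem from a DNS problem. The server name below is a placeholder:

        :: Force IPv4 when pinging by name
        ping -4 win08server

        :: Compare with the IPv6 path the station currently takes
        ping -6 win08server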
