Search Results

Search found 30252 results on 1211 pages for 'network programming'.

Page 919/1211 | < Previous Page | 915 916 917 918 919 920 921 922 923 924 925 926  | Next Page >

  • On an unencrypted public wi-fi hotspot, what exactly is a packet sniffer doing to get another computer's packet?

    - by hal10001
    I get mixed results when reading information security articles. Some state that in order to do this you also need to set up some sort of honeypot, with a running access point and a local Web server, to intercept traffic. Other articles seem to indicate you don't need that: you can just run Wireshark, and it will detect all packets being sent on the network. How could that be, and what exactly is a packet sniffer doing to get those packets? Does this involve intercepting wireless signals transmitted over the wireless protocol and frequency via the NIC on the computer running a program like Wireshark?
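    On an open (unencrypted) network no honeypot is needed: every frame travels in the clear, and a wireless NIC in monitor mode (or promiscuous mode on a shared medium) hands every frame it can decode up to software. Wireshark rides on exactly that capability. As a minimal illustration of the same idea, here is a raw-socket capture sketch in Python; it is Linux-only, needs root, and the interface name wlan0 is an assumption (putting the card into monitor mode first, e.g. with iw or airmon-ng, is a separate step):

        import socket

        ETH_P_ALL = 0x0003  # ask the kernel for every protocol, not just IP

        # An AF_PACKET socket bound to an interface receives whole link-layer
        # frames. On an open Wi-Fi network those frames are unencrypted, which
        # is all a sniffer needs; no man-in-the-middle setup is involved.
        s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
        s.bind(("wlan0", 0))  # interface name is an assumption

        frame = s.recv(65535)
        # On an Ethernet-framed interface the first 14 bytes are dst MAC,
        # src MAC and ethertype; monitor-mode 802.11 framing differs.
        print("captured", len(frame), "bytes; first 14 bytes:", frame[:14].hex())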

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata “To Roman Numerals”, and whether ITDD wasn't a violation of TDD's principle of leaving out “advanced topics like mocks”. I like to respond to his questions with this article; there's more to say than fits into a comment. Mocks and TDD: I don't see how TDD avoids or opposes mocks. TDD and mocks are orthogonal. TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for “expensive” resources. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced”, or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by doing the kata. ITDD for “To Roman Numerals”: gustav asked for the kata “To Roman Numerals”. I won't explain the requirements again; you can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently. 1. Analyse. A demonstration of TDD should never skip the analysis phase; it should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals”, as written in this “requirement document”, is not enough to start software development. Coding should only begin if the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations: TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a Wikipedia article.
    They are supposed to be just “typical examples” without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that: the domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution for converting into them automatically. 2. Solve. The usual TDD demonstration skips a solution-finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code, you'd better have a plan for what to code. This does not mean you have to do “Big Design Up-Front”. It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited, so the logical solution does not need to be limited or preliminary or tentative either. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution. Here's my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic: each “digit” can be taken at face value; no multiplication with a base is required. But what about IV, which looks like a contradiction to the above rule? It is not – if you accept roman “digits” not to be limited to single characters. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits” too. Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult, because here some “digits” get subtracted. Here's the list of roman “digits” with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}. Since I take IV, IX etc. as “digits”, translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three-phase process: find all the factors, translate the factors found, compile the roman representation. Translation is just a look-up.
    Finding, though, needs some calculation: find the highest remaining factor fitting into the value, remember it and subtract it from the value, then repeat with the remaining value and the remaining factors. Please note: this is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test: the translation, the compilation, and finding the factors. Testing the translation primarily means to check if the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process. 3. Implement. First I write a test for the acceptance test cases. It's red because there's no implementation, not even of the API. That's in conformance with “TDD lore”, I'd say. Next I implement the API. The acceptance test now is formally correct, but still red, of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests most quickly, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now – one by one. I start with a simple “detail function”, Translate(), and I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes, that's a white-box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. Here's the implementation to satisfy the test: it's as simple as possible, just how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and for doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away; splitting it in two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away before their mental eye – others need to work their way towards it. Having two tests I find more important. Now for the next low-hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess.
    And normally I would not even have bothered to write that one, because the implementation is so simple. I don't need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions, but still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill-down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one is. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor – which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on. Now for more than a single factor: interestingly not just one test becomes green now, but all of them. Great! You might say that then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple – even simpler than a recursion, of which I had thought briefly during the problem-solving phase. And by the way: the acceptance tests also went green. Mission accomplished – at least functionality-wise. Now I have to tidy up things a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required – but maybe in a different way than you would expect. That's why I'd rather call it “cleanup”. First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me; they only have temporary value; they are brittle. Only acceptance tests need to remain. However, I carry over the single-“digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This then is my final test class, and this is the final production code. Test coverage as reported by NCrunch is 100%. Reflection: Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So-called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution – the leaves of the functional decomposition tree.
    Where things became fuzzy because the design did not cover any more details – as with Find_factors() – I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: Encapsulation of the implementation details was not compromised; naturally, private methods could stay private, and I did not need to make them internal or public just to be able to test them. And I was able to write focused tests for small aspects of the solution; no need to test everything through the solution root, the API. The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution – manifest the solution in code. Writing tests first is a good practice, but it should not be taken dogmatically, and above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable – and not left to a refactoring step at the end, which is often skipped for various reasons. PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
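    The article's code appears only as screenshots, which don't survive in this text-only excerpt. As a rough stand-in, here is a compact Python sketch of the find-factors/translate/compile process described above; the original is C# with the API string ToRoman(int arabic), so all names and structure here are illustrative only:

        # Factors double as roman "digits": IV, IX etc. are digits too, so
        # conversion is pure summation with no subtraction cases.
        FACTORS = [
            (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
            (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
            (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
        ]

        def find_factors(arabic):
            # Take the highest remaining factor that fits, subtract, repeat.
            factors = []
            for value, _ in FACTORS:
                while arabic >= value:
                    factors.append(value)
                    arabic -= value
            return factors

        def translate(factor):
            # Translation is just a look-up in the factor/digit map.
            return dict(FACTORS)[factor]

        def to_roman(arabic):
            # Compile: concatenate translated factors in descending order.
            return "".join(translate(f) for f in find_factors(arabic))

        assert to_roman(1954) == "MCMLIV"
        assert to_roman(2014) == "MMXIV"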

    Read the article

  • Diagnosing high CPU waiting

    - by Will
    I have a monitoring server that is running icinga/collectd/graphite with about 50 hosts. I have noticed high load/sluggish performance on the box. If you take a look at top, you'll see: Cpu(s): 0.6%us, 0.2%sy, 0.0%ni, 7.6%id, 23.4%wa, 0.0%hi, 0.2%si, 0.0%st Notice the HUGE %wa value, which as far as I know means a network or disk bottleneck. ifconfig shows no dropped packets and there's not a ton of bandwidth going on, so that leaves disk issues, right? There's not a lot of disk writing going on either... iotop reports we're only writing a little over 1 MB per second, and the RAID tool reports everything is A-OK and write caching is enabled. How do I go about figuring out how to fix this? UPDATE: iostat -x output is: avg-cpu: %user %nice %system %iowait %steal %idle 0.62 0.10 0.31 9.65 0.00 89.31 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 0.21 33.34 83.55 16.54 1599.94 399.07 19.97 43.21 416.98 3.71 37.13
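    A high %wa means CPU time spent idle while at least one task is blocked in uninterruptible disk I/O, and the iostat output above (await of roughly 417 ms on sda) points the same way. One next step is to find which processes are actually stuck in the uninterruptible "D" state. A small sketch that walks /proc on Linux (field positions per proc(5)):

        import os

        # Processes in state 'D' (uninterruptible sleep) are the ones
        # blocked on I/O and driving %wa up.
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/stat") as f:
                    fields = f.read().split()
                # Index 1 is the command name in parentheses, index 2 the
                # state. (A comm containing spaces would shift the fields,
                # but that is rare enough for a quick diagnostic.)
                if fields[2] == "D":
                    print(pid, fields[1])
            except FileNotFoundError:
                pass  # the process exited while we were scanning

    Running it a few times in a row shows whether the same process (e.g. a collectd or graphite writer) is repeatedly stuck, which narrows the search considerably.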

    Read the article

  • Generating Report for NUnit

    - by thangchung
    All source code for this post can be found at my github. A while ago, I received a request from people asking how they can generate reports of the results of testing with NUnit. In fact, I had never done this. In my little programming world, I only cared about the test results, red-green-refactor, and that was it. So the question was quite unexpected. I knew that I could use NCover to generate reports, but NCover's reports are too simple; they do not give details on the number of test cases, test methods, and so on. And so I began to look into creating an interesting report for NUnit. I was lucky to find an open source project here. Its authors call it NUnit2Report, but one disadvantage is that it only runs on .NET 1.0 – really too old compared to the current version, 4.0. I downloaded it to try it out, but I could not run it. I had to open its source code, and found that it uses XSLT to convert the output of NUnit results from XML to HTML. Nothing really special, because I also knew that an output file with an XML extension is created after an NUnit run; the author only uses this file to convert to HTML using XSLT. And I decided to convert it to .NET 4.0, because then I would not have to code from scratch. The conversion took me some time, but I was lucky to finally get what I wanted. Thanks to Gilles for this OSS; I will send him a mail to thank him for his effort in putting it out there. Now I will show people how to do it. I used NAnt for the automated build and NUnit for running the test cases, and I used the Selenium testing framework. After writing three test cases using Selenium, I ran NUnit and got the following results: there is 1 failure and 2 successes. In the bin directory of this project will be the NUnit output file, as shown below. Then I created a build file, and a bat file for easy running (PowerShell can be used here as well). Double-click the bat file to create a report. Finally, open the index.html file in the folder to view the report. As everyone can see, the test cases are divided up very clearly, which meets my requirements. This is really good. Once again, I really thank NUnit2Report from Gilles. People can contact him via the mail address [email protected] or the website http://nunit2report.sourceforge.net. It really is useful to those who have to deliver reports to QA. Hopefully this post will help anyone really interested in generating reports for NUnit.
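    The core of the approach is just an XSLT transform applied to NUnit's XML result file. The article does this from a .NET 4.0 build of NUnit2Report driven by NAnt; as a language-neutral illustration of the same idea, here is a minimal Python/lxml sketch (file names are placeholders: TestResult.xml is NUnit's default output, and nunit2report.xsl stands for whichever stylesheet you use):

        from lxml import etree

        # Load the stylesheet once, then apply it to the NUnit XML output.
        transform = etree.XSLT(etree.parse("nunit2report.xsl"))
        result = transform(etree.parse("TestResult.xml"))

        # str() serializes the result honoring the stylesheet's xsl:output.
        with open("index.html", "w", encoding="utf-8") as out:
            out.write(str(result))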

    Read the article

  • Suspending my laptop breaks ethernet over firewire, are there commands which can fix it?

    - by Josh
    As mentioned in this question, I am using a firewire cable to provide a private network between my laptop and my desktop, because it makes using the screen sharing program synergy much nicer than using WiFi. However, when I leave my office for the day and suspend my laptop, the desktop and the laptop cannot communicate over firewire anymore when I return the next day. The firewire0 device still has an IP address, but when I try to ping the desktop I get "no route to host". I'm using kernel 2.6.35-24-generic #42-Ubuntu SMP x86_64 on Ubuntu 10.10. Is there some way I can remedy this without a reboot? Like removing some kernel modules and re-inserting them? Here's what I have tried so far and the results: root@token:~# dmesg|tail -n 1 [592525.204024] firewire_core: phy config: card 0, new root=ffc1, gap_count=5 root@token:~# modprobe -r firewire_net firewire_ohci firewire_core root@token:~# modprobe -v firewire_ohci insmod /lib/modules/2.6.35-24-generic/kernel/lib/crc-itu-t.ko insmod /lib/modules/2.6.35-24-generic/kernel/drivers/firewire/firewire-core.ko insmod /lib/modules/2.6.35-24-generic/kernel/drivers/firewire/firewire-ohci.ko root@token:~# dmesg|tail [592525.204024] firewire_core: phy config: card 0, new root=ffc1, gap_count=5 [592563.410868] firewire_ohci: Removed fw-ohci device. [592579.160086] firewire_ohci: Added fw-ohci device 0000:02:00.0, OHCI v1.10, 4 IR + 8 IT contexts, quirks 0x2 [592579.160137] firewire_ohci: isochronous cycle inconsistent [592579.660294] firewire_core: created device fw0: GUID 0000000000000000, S400 [592579.663805] firewire_core: created device fw1: GUID 0017f2fffe89bce6, S400 [592579.663813] firewire_core: phy config: card 0, new root=ffc1, gap_count=5 [592579.700720] firewire_core: phy config: card 0, new root=ffc1, gap_count=5 [592579.700842] firewire_core: refreshed device fw0 [592579.702603] firewire_net: firewire0: IPv4 over FireWire on device 0000000000000000 root@token:~# ping stan.firewire PING stan.firewire (192.168.100.1) 56(84) bytes of data. From token.local (192.168.100.3) icmp_seq=1 Destination Host Unreachable From token.local (192.168.100.3) icmp_seq=2 Destination Host Unreachable From token.local (192.168.100.3) icmp_seq=3 Destination Host Unreachable I also tried removing the modules prior to suspending, and re-inserting after resuming. That did not work either :-(

    Read the article

  • Other Ideas to troubleshoot Cisco IPSec VPN on OSX?

    - by Tawm
    We have one user running OSX Snow Leopard who is having issues staying connected to our VPN running off of an ASA5510. His connection can die even as he's actively pushing traffic across it or if he's been idle for a period of time. Other users on Snow Leopard, Lion, XP, Vista, 7 and various linux flavors are able to stay connected for 24hrs+ without issue We've deleted and remade the connection in System Preferences Networking, ran killall racoon (kills any lingering connections) Below are the logs from the user's system.log from a connect/disconnect cycle: Oct 10 21:22:25 username racoon[8192]: Connecting. Oct 10 21:22:25 username racoon[8192]: IKE Packet: transmit success. (Initiator, Aggressive-Mode message 1). Oct 10 21:22:25 username racoon[8192]: IKEv1 Phase1 AUTH: success. (Initiator, Aggressive-Mode Message 2). Oct 10 21:22:25 username racoon[8192]: IKE Packet: receive success. (Initiator, Aggressive-Mode message 2). Oct 10 21:22:25 username racoon[8192]: IKEv1 Phase1 Initiator: success. (Initiator, Aggressive-Mode). Oct 10 21:22:25 username racoon[8192]: IKE Packet: transmit success. (Initiator, Aggressive-Mode message 3). Oct 10 21:22:29 username racoon[8192]: IKE Packet: transmit success. (Mode-Config message). Oct 10 21:22:29 username racoon[8192]: IKEv1 XAUTH: success. (XAUTH Status is OK). Oct 10 21:22:29 username racoon[8192]: IKE Packet: transmit success. (Mode-Config message). Oct 10 21:22:29 username racoon[8192]: IKEv1 Config: retransmited. (Mode-Config retransmit). Oct 10 21:22:29 username racoon[8192]: IKE Packet: receive success. (MODE-Config). Oct 10 21:22:29 username configd[14]: event_callback: Address added. previous interface setting (name: en1, address: 192.168.0.100), current interface setting (name: utun0, family: 1001, address: 10.215.8.53, subnet: 255.0.0.0, destination: 10.215.8.53). Oct 10 21:22:29 username racoon[8192]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1). Oct 10 21:22:29 username configd[14]: network configuration changed. Oct 10 21:22:29 username racoon[8192]: IKE Packet: receive success. (Initiator, Quick-Mode message 2). Oct 10 21:22:29 username racoon[8192]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3). Oct 10 21:22:29 username racoon[8192]: IKEv1 Phase2 Initiator: success. (Initiator, Quick-Mode). Oct 10 21:22:29 username racoon[8192]: Connected. Oct 10 21:22:29 username configd[14]: SCNCController: Connected. Oct 10 21:22:29 username racoon[8192]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1). Oct 10 21:22:29 username racoon[8192]: IKE Packet: receive success. (Initiator, Quick-Mode message 2). Oct 10 21:22:29 username racoon[8192]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3). Oct 10 21:22:29 username racoon[8192]: IKEv1 Phase2 Initiator: success. (Initiator, Quick-Mode). Oct 10 21:22:47 username login[8200]: USER_PROCESS: 8200 ttys003 Oct 10 21:22:48 username GrowlHelperApp[160]: Periodic CFURLCache Insert stats (iters: 17240) - Tx time:0.001749, # of Inserts: 1, # of bytes written: 304, Did shrink: NO, Size of cache-file: 26624, Num of Failures: 0 Oct 10 21:25:24 username login[7367]: DEAD_PROCESS: 7367 ttys002 Oct 10 21:25:31 username login[7907]: DEAD_PROCESS: 7907 ttys001 Oct 10 21:27:32 username configd[14]: SCNCController: Disconnecting. (Connection was up for, 303 seconds). Oct 10 21:27:32 username racoon[8192]: IKE Packet: transmit success. (Information message). Oct 10 21:27:32 username racoon[8192]: IKEv1 Information-Notice: transmit success. 
(Delete IPSEC-SA). Oct 10 21:27:32 username racoon[8192]: IKE Packet: transmit success. (Information message). Oct 10 21:27:32 username racoon[8192]: IKEv1 Information-Notice: transmit success. (Delete IPSEC-SA). Oct 10 21:27:32 username racoon[8192]: IKE Packet: transmit success. (Information message). Oct 10 21:27:32 username racoon[8192]: IKEv1 Information-Notice: transmit success. (Delete ISAKMP-SA). Oct 10 21:27:32 username racoon[8192]: Disconnecting. (Connection was up for, 302.766105 seconds). Oct 10 21:27:32 username configd[14]: network configuration changed. Oct 10 21:27:34 username login[8200]: DEAD_PROCESS: 8200 ttys003

    Read the article

  • Our New Website Header (& Other Tweaks)

    - by justin.kestelyn
    Last week, the Oracle Technology Network Website went fixed-width. There are several reasons for this, most relating to providing a consistent user experience, easier management of Website content, etc. Furthermore, it's fairly standard for developer portals these days - java.sun.com, MSDN, and IBM DeveloperWorks are also all fixed-width sites. (My apologies to everyone who is unhappy about this change, but it really is an overall positive one.) Today, we have rolled out a brand-new header, the first step in what we call the "Mosaic" project - which is an effort to make the user experience across all Oracle Websites more consistent. To summarize the impact: The "pull-down" menus on the OTN site disappear; most of them move into a "flyout" button in the header. You can access the OTN flyout from any page on Oracle.com or the OTN site. Great for our page views. :) You also have direct access to the Downloads index from anywhere on Oracle.com. If you so desire, you can directly access product overviews, Oracle University and Support info, Oracle Store, etc etc from the OTN site now. Due to limited space in the flyout we cannot accommodate *all* the pull-down items, but they are all no more than 1 or 2 clicks away. This approach has been validated in extensive user testing over the last few months; I welcome your feedback now in comments. There are many other changes in train, with the next one being: A major homepage redesign, the first in 4 or 5 years.

    Read the article

  • Recycle Bin for Windows Server 2003 File Shares

    - by Joseph Sturtevant
    One of the networks I administer uses Windows Server 2003 File Shares to provide network storage for users. To protect against accidental deletion, I use Shadow Copies to create snapshots twice a day. This method is only effective, however, for files which were on the share during the last snapshot. When users accidentally delete files recently placed on the share, I have no recourse except to remote desktop into the server and attempt retrieval with an undelete utility (this is only effective if the file has not been overwritten). Is there a feature like the Windows Recycle Bin for Windows Server 2003 File Shares? What is the best way to protect my users against accidental file deletion in this scenario?

    Read the article

  • ArchBeat Link-o-Rama for October 29, 2013

    - by OTN ArchBeat
    Exceptions Handling and Notifications in ODI | Christophe Dupupet Oracle Fusion Middleware A-Team director Christophe Dupupet reviews the techniques that are available in Oracle Data Integrator to guarantee that the appropriate individuals are notified in the event that ODI processes are impacted by network outages or other mishaps. Tech Article: SOA in Real Life: Mobile Solutions The latest article in the Industrial SOA series looks at mobile computing and how companies are developing SOA to go. Oracle Coherence, Split-Brain and Recovery Protocols In Detail | Ricardo Ferreira Ricardo Ferreira's article "provides a high level conceptual overview of Split-Brain scenarios in distributed systems," focusing on a "specific example of cluster communication failure and recovery in Oracle Coherence." WebLogic & FMW Provisioning update | Edwin Biemond "Provisioning was a hot topic on Oracle Openworld 2013," says Oracle ACE Edwin Biemond. His latest blog post discusses what is now possible with WebLogic and Fusion Middleware, and looks at what might be possible in the future. Reusing and Extending ADF BC Entities from Common Model | Andrejus Baranovskis Oracle ACE Director Andrejus Baranovskis' post is about "ADF architecture and better application structuring with EO reuse from a common model." Andrejus describes "how to implement additional requirements to common model in extended ADF BC Entities." Thought for the Day "I work hard, I work late, I have nothing on my conscience. When I go to bed, I sleep." — Ellen Johnson Sirleaf, 24th and current President of Liberia (Born 29 October 1938) Source: brainyquote.com

    Read the article

  • Preview Links and Images in Google Chrome

    - by Asian Angel
    Anyone who has used the CoolPreviews extension in Firefox knows how wonderful that preview window can be. Now you can get the same kind of functionality in Chrome with the ezLinkPreview extension. Note: Extension will not work on websites containing “frame buster” code (navigation to the actual URL will occur). Before Normally if you want to have a better look at a particular webpage the only option you have is to go ahead and open it in a new tab or window. But it would certainly be nice to be able to take a quick “sneak peek” before-hand… After As soon as you have finished installing the extension everything is ready to go…just refresh any pages open prior to installation and enjoy the preview goodness. When you hover your mouse near any link you will notice a small “Preview Button” appear with the letters “EZ” inside. A closer look at the “Preview Button”. Click on the “Preview Button” to open the popup window. Now you can get a very good idea of whether the page is worth visiting or not. Here is a closer look at the popup window. Notice that you can see the URL for the webpage and access a convenient set of buttons on the right side (Open to new tab, Pin to keep overlay open, and Close). You can even resize the window as desired to best suit your needs (you can actually grab any of the four corners to resize the popup window). It is also possible to open a “preview window” inside the popup window…you can see the “Preview Button” here… If you have Chrome maximized you can enjoy using a large sized “preview window”. Now that is nice! For those who may be curious you can see that ezLinkPreview works nicely with images too. Conclusion The ezLinkPreview extension provides a quick and simple way to preview links and/or images while you are browsing. If you are looking for similar functionality in Firefox then be sure to read our article on CoolPreviews here. Links Download the ezLinkPreview extension (Google Chrome Extensions)

    Read the article

  • Stop YouTube Videos from Automatically Playing in Chrome

    - by The Geek
    If you’ve actually used the internet before, you’ve probably come across a page with an auto-playing YouTube clip, and chances are good it was a rather annoying one. Here’s how to stop them from starting automatically in Chrome. We’ve already told you how to stop them from automatically playing if you’re a Firefox user (best answer: use Flashblock!), but now it’s time for Chrome users to get their turn. Use the Stop Autoplay for YouTube Extension The great thing about this extension is that it stops the video from playing, but it allows it to continue buffering, so when you do feel like playing the video, it’ll already be downloaded—really useful for people with slower internet connections. There’s no UI or anything fancy, just head to the extension page and click the Install button. If you want to get rid of it later, use the Tools –> Extensions menu (or you can type chrome://extensions/ into your address bar), and then click the Uninstall link for that add-on. Download Stop Autoplay for YouTube [Google Chrome Extensions] Using FlashBlock for Chrome If you really wanted to, you could just disable Flash across the board using FlashBlock for Chrome. Once you’ve installed the extension, you won’t see any Flash elements anywhere, and you’ll have to move your mouse over them and click to enable them each time. When I installed the extension the first time, I noticed that YouTube was already in the allow list. I’m not sure if that’s the default setting or not, but you can use the icon in the address bar, or the Options from the Extensions panel to get to the settings page, and from there you can remove anything from the White List that you wouldn’t want. Another nice feature about Flash Block is that it can also block Silverlight, or you could simply uninstall or remove unnecessary Chrome plug-ins. Download FlashBlock for Chrome

    Read the article

  • Home-Router: Access internal server using external ip [migrated]

    - by user15863
    If I've got a typical home router -- say a NetGear -- which has certain ports forwarded to an internal server, is there a way to tweak the router to let me access that internal server using the external IP address from within the same network? Is there a non-enterprise-grade router that can handle this type of thing? In case that was strangely worded, let me re-phrase with an example. My external IP is 1.2.3.4. My internal server is 10.4.3.100. Port 1178 is being forwarded from the router to 10.4.3.100. I'd like to be able to hit 10.4.3.100 from an internal IP of 10.4.3.10 by using the external IP of 1.2.3.4. Possible?

    Read the article

  • GMail IMAP + Apple Mail / iPhone - "Account exceeded bandwidth limits. (Failure)"

    - by bpapa
    Started seeing this this morning in Apple Mail. I have one of those exclamation-point error indicators next to "Inbox", with this error message when I click on it: There may be a problem with the mail server or network. Verify the settings for account “IMAP Account” or try again. The server returned the error: Account exceeded bandwidth limits. (Failure). This is in Snow Leopard. I'm using GMail IMAP, and I am way below the size quota - I've never heard of there even being a bandwidth quota. I'm also not getting mail from the same account in the mail app on my iPhone. EDIT - a month later I'm still seeing this, and I'm thinking of just switching to MobileMe. EDIT AGAIN - Making this community wiki. I stopped seeing the problem once I updated Snow Leopard to the latest version, but since others continue to see it...

    Read the article

  • ArchBeat Link-o-Rama for August 2, 2013

    - by OTN ArchBeat
    Podcast: Data Warehousing and Oracle Data Integrator - Part 2 Part two of the discussion about Data Warehousing and Oracle Data Integrator focuses on how data warehousing is changing and the forces driving that change. Panelists for this discussion are Uli Bethke, Oracle ACE Director Cameron Lackpour, Oracle ACE Director (and guest producer) Gurcan Orhan, and Michael Rainey. Case Management In-Depth: Cases & Case Activities Part 1 – Activity Scope | Mark Foster FMW solution architect Mark Foster kicks off a new series with a look at the decisions made on the scope of BPM process case activities. Video: Quick Intro to WebLogic Maven Plugin 12.1.2 | Mark Nelson This YouTube video by FMW solution architect Mark Nelson offers a quick introduction to the basics of installing and using the new Oracle WebLogic 12.1.2 Maven Plugin. Running the Managed Coherence Servers Example in WebLogic Server 12c | Tim Middleton FMW solution architect Tim Middleton shares the technical details on the new Managed Coherence Servers feature and outlines how you can run the sample application available with a WebLogic Server 12.1.2 install. What’s wrong with how we develop and deliver SOA Applications today? | Mark Nelson "When we arrive at the go-live day, we have a lot of fear and uncertainty," says solution architect Mark Nelson of the typical SOA practice. "We have no idea if the system is going to work in production. We have never tested it under a production-like load, and we have not really tested it for performance, longevity, etc." OTN Latin America Tour 2013 | Kai Yu Oracle ACE Director Kai Yu shares the session abstracts from his participation in the 2013 Oracle Technology Network Latin America conference tour, which made its way through OUG conferences in Ecuador, Guatemala, Panama, and Costa Rica. Webcast: Latest Security Innovations in Oracle Database 12c Oracle Database 12c includes more new security capabilities than any other release in Oracle history! In this webcast Roxana Bradescu (Director, Oracle Database Security Product Management) will discuss these capabilities and answer your questions. (Registration required.) Thought for the Day "The main goal in life career-wise should always be to try to get paid to simply be yourself." — Kevin Smith (Born August 2, 1970) Source: brainyquote.com

    Read the article

  • Can someone explain the "use-cases" for the default munin graphs?

    - by exhuma
    When installing munin, it activates a default set of plugins (at least on Ubuntu). Alternatively, you can simply run munin-node-configure to figure out which plugins are supported on your system. Most of these plugins plot straight-forward data. My question is not so much about the nature of the data (well... maybe for some) but about what it is that you look for in these graphs. It is easy to install munin and see fancy graphs. But having the graphs and not being able to "read" them renders them totally useless. I am going to list standard plugins which are enabled by default on my system. So it's going to be a long list. For completeness, I am also going to list plugins which I think I understand and give a short explanation as to what I think they're used for. Please correct me if I am wrong about any of them. So let me split this question into three parts: plugins where I don't even understand the data, plugins where I understand the data but don't know what I should look out for, and plugins which I think I understand. Plugins where I don't even understand the data: These may contain questions that are not necessarily aimed at munin alone. Not understanding the data usually means a gap in fundamental knowledge of operating systems/hardware... ;) Feel free to respond with a "giyf" answer. These are plugins where I can only guess what's going on... Disk IOs per device (IOs/second): What's an IO? I know it stands for input/output, but that's as far as it goes. Disk latency per device (Average IO wait): Not a clue what an "IO wait" is... IO Service Time: This one is a huge mess, and it's near impossible to see anything in the graph at all. Plugins where I understand the data but don't know what I should look out for: IOStat (blocks/second read/written): I assume the things to look out for here are spikes, which would mean the device is in heavy use? Available entropy (bytes): I assume this is important for random number generation? Why would I graph it? So far the value has always been near constant. VMStat (running/I/O sleep processes): What's the difference between this one and the "processes" graph? Both show running/sleeping processes, whereas the "Processes" graph seems to have more details. Disk throughput per device (bytes/second read/written): What's the difference between this one and the "IOStat" graph? inode table usage: What should I look for in this graph? Plugins which I think I understand: I'll be guessing some things here... correct me if I am wrong. Disk usage in percent (percent): How much disk space is used/remaining. As this approaches 100%, you should consider cleaning up or extending the partition. This is extremely important for the root partition. Firewall Throughput (packets/second): The number of packets passing through the firewall. If this is spiking for a longer period of time, it could be a sign of a DOS attack (or we are simply receiving a large file). It can also give you an idea about your firewall performance. If it's levelling out and you need more "power" you should consider load balancing. If it's levelling out and you see a correlation with your CPU load, it could also mean that your hardware is not fast enough. Correlations with disk usage could point to excessive LOG targets in your FW config. eth0 errors (packets in/out): Network errors. If this value is increasing, it could be a sign of faulty hardware. eth0 traffic (bits/second in/out): Raw network traffic. This should correlate with firewall throughput.
    number of threads: An ever-increasing value might point to a process not properly closing threads. Investigate! processes: Breakdown of active processes (including sleeping). A quick spike in here might point to a fork-bomb. A slow but ever-increasing value might point to an application spawning sub-processes but not properly closing them. Investigate using ps faux. process priority: This shows the distribution of process priorities. Having only high-priority processes is not of much use. Consider de-prioritizing some. cpu usage: Fairly straightforward. If this is spiking, you may have an attack going on, or a process is hogging the CPU. If it's slowly increasing and approaching the max in normal operations, you should consider upgrading your hardware (or load-balancing). file table usage: Number of actively open files. If this is reaching the max, you may have a process opening files but not properly releasing them. load average: Shows a summarized value for the system load. Should correlate with CPU usage. Increasing values can come from a number of sources. Look for correlations with other graphs. memory usage: A graphical representation of your memory. As long as you have a lot of unused+cache+buffers you are fine. swap in/out: Shows the activity on your swap partition. This should always be 0. If you see activity on this, you should add more memory to your machine!
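    For a couple of the metrics above you can spot-check the raw numbers yourself instead of waiting for the graphs. A minimal Python sketch reading straight from /proc (Linux-only; the procfs file names are standard, everything else is illustrative):

        def read_entropy():
            # "Available entropy": low values can stall programs that read
            # /dev/random (e.g. key generation), which is why it is graphed.
            with open("/proc/sys/kernel/random/entropy_avail") as f:
                return int(f.read())

        def swap_activity():
            # "swap in/out": pswpin/pswpout are cumulative page counts; if
            # they grow between two samples, the box is actively swapping.
            counters = {}
            with open("/proc/vmstat") as f:
                for line in f:
                    key, value = line.split()
                    if key in ("pswpin", "pswpout"):
                        counters[key] = int(value)
            return counters

        print("entropy:", read_entropy())
        print("swap counters:", swap_activity())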

    Read the article

  • Import LDIF file to external server

    - by colemanm
    As a follow-up to my previous question, which I've resolved part of, what we're trying to do now is take an exported .ldif file of the "Users" container on our OS X Server and import it into a separate OpenLDAP server on an EC2 instance. This we'll use for LDAP user authentication of other apps without having to open our internal network to LDAP traffic. The exported .ldif file thinks the DN of the "Users" container is cn=users,dc=server,dc=domain,dc=com. Is it easiest to configure the EC2 OpenLDAP server to think that it's domain is the same so the container is imported to the proper place? Or should we edit the text of the .ldif file to change the DN to match the external naming? Hopefully that makes sense... but I'm confused as to the best way to accomplish this.
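    Editing the text of the .ldif is usually the simpler route: OpenLDAP will happily import entries whose DNs match whatever suffix the EC2 server is configured with, so rewriting the suffix before import avoids mirroring the internal naming. A small sketch (file names and suffixes are examples; import the result with something like ldapadd -x -D "cn=admin,dc=example,dc=org" -W -f users-fixed.ldif):

        OLD = "dc=server,dc=domain,dc=com"      # suffix in the OS X export
        NEW = "dc=example,dc=org"               # suffix of the EC2 server

        with open("users.ldif") as src, open("users-fixed.ldif", "w") as dst:
            for line in src:
                # DNs appear in dn:, member:, uniqueMember: etc., so a plain
                # substring replace usually covers a simple export. Caveat:
                # base64-encoded attributes (lines with '::') are not caught
                # and would need decoding first.
                dst.write(line.replace(OLD, NEW))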

    Read the article

  • Sonicwall - dual WAN ports - switch from one to another

    - by Charles
    Hi, Folks! I'm using a SonicWall NSA 240 which has two WAN ports (T1 and Comcast) and the LAN port has a cable which connects to a switch. From the switch, several cables connect to other switches. The SonicWall doesn't have DHCP enabled; one of our domain controllers running Windows Server 2003 also functions as a DHCP server. Is there a way for a user in our network to change connection from T1 to Comcast as their ISP or vice versa? In other words, if a user is connected via the T1, can he/she somehow connect via Comcast instead? Thanks, in advance, for your help! Sincerely, Charles

    Read the article

  • XNA Notes 001

    - by George Clingerman
    Just a quick recap of things I noticed going on in or around the XNA community this past week. I’m sure there’s a lot I missed (it’s a pretty big community with lots of different parts to it) but these were the things I caught that I thought were pretty cool. The XNA Team Michael Klucher gave a list of books every gamer should read. http://twitter.com/#!/mklucher/status/22313041135673344 Shawn Hargreaves posted about Nelxon Studio’s cheatsheet for converting 3.1 to 4.0 http://blogs.msdn.com/b/shawnhar/archive/2011/01/04/xna-3-1-to-4-0-cheat-sheet.aspx?utm_source=twitterfeed&utm_medium=twitter XNA Game Studio won the Frontline award for Programming Tool by GameDev magazine! Congrats to the XNA team! http://www.gdmag.com/homepage.htm XNA MVPs In January several MVPs were up for re-election; Jim Perry, Andy ‘The ZMan’ Dunn, Glenn Wilson and myself were all re-awarded a Microsoft MVP award for their contributions to the XNA/DirectX communities. https://mvp.support.microsoft.com/communities/mvp.aspx?product=1&competency=XNA%2fDirectX A movement to get Michael McLaughlin an MVP award has started and you can join in too! http://twitter.com/#!/theBigDaddio/status/22744458621620224 http://www.xnadevelopment.com/MVP/MichaelMcLaughlinMVP.txt Don’t forget you can nominate ANYONE for an MVP award; that’s how they work. https://mvp.support.microsoft.com/gp/mvpbecoming  XNA Developers James Silva of Ska Studios hit 9,200 sales of ZP2KX and recommends you listen to Infected Mushroom. http://twitter.com/#!/Jamezila/status/22538865357094912 http://en.wikipedia.org/wiki/Infected_Mushroom Noogy, creator of the upcoming XBLA title Dust: An Elysian Tail, posts some details of his art creation. http://noogy.com/image/statue/statue.html Xbox LIVE Indie Game News Microsoft posted an acknowledgement that there was an issue with the sales data, which has been addressed, and apologized for not posting about it sooner. http://forums.create.msdn.com/forums/p/71347/436154.aspx#436154 Winter Uprising sales are still chugging along and being updated by Xalterax (by those developers willing to actually share sales numbers. Thanks for sharing guys, much appreciated!) http://forums.create.msdn.com/forums/t/70147.aspx Don’t forget about Dream Build Play coming up in February! http://www.dreambuildplay.com/Main/Home.aspx The Best Xbox LIVE Indie Games December Edition comes out on NeoGaf http://www.neogaf.com/forum/showthread.php?t=414485 The Greatest Xbox LIVE Indie Games of 2010 on Dealspwn – congrats to DrMistry and MStarGames for his #1 spot with his massive XBLIG Space Pirates From Tomorrow! http://www.dealspwn.com/xbligoty-2010/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Dealspwn+%28Dealspwn%29 XNA Game Development The future of XACT and WP7 has finally been confirmed and we finally know what our options are for looping audio seamlessly on WP7. http://forums.create.msdn.com/forums/p/61826/436639.aspx#436639  Super Mario 3 Design Notes is an interesting read for XBLIG developers, giving some insight into the training that naturally occurs for players as they start playing the game. Good things for XBLIG developers to think about. http://www.significant-bits.com/super-mario-bros-3-level-design-lessons

    Read the article

  • How to maintain one file across many production servers (Windows and Linux)

    - by Brien
    My organization wants to centrally manage an Oracle TNSnames file for all of their production servers. When that file changes, they want to be able to push out the changes to all servers that use it with minimal effort. Approaches that have been considered: Centralized file server (drawback: if the file server or the network connection to the file server goes down, the servers have no access to the critical file) Subversion client on each server (drawback: using a source control tool in production, added complexity) Store an individual copy of the file on each server (drawback: changing the file contents involves making changes on many different servers) Update Can I use DFS to do this?
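    For the Linux servers, the push itself can be a few lines over SSH/SFTP. A rough sketch using paramiko (a third-party SSH library that would need to be installed; the host names, destination path, and key-based auth are all assumptions, and the Windows servers would need a different transport, such as a copy to an administrative share):

        import paramiko

        SERVERS = ["db01", "db02", "app01"]        # hypothetical host list
        SOURCE = "tnsnames.ora"                    # the centrally edited copy
        DEST = "/opt/oracle/network/admin/tnsnames.ora"  # typical TNS_ADMIN location

        for host in SERVERS:
            client = paramiko.SSHClient()
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            client.connect(host, username="oracle")  # key-based auth assumed
            sftp = client.open_sftp()
            sftp.put(SOURCE, DEST)
            sftp.close()
            client.close()
            print("pushed to", host)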

    Read the article

  • Replacing a W2K3 Domain Controller - what do I need to know?

    - by Marko Carter
    I have a network of around 70 machines, currently with two DCs both running Windows Server 2003 (DC0 & DC1). DC0 is a five-year-old PowerEdge 1850 and has recently become increasingly flakey; in the past fortnight it has fallen over twice. I want to replace this machine, but I'm cautious as there is huge scope for this sort of thing to go wrong. The way I imagine doing this is building a new machine, then doing a DCPROMO and running three domain controllers for a month or so until I'm happy that everything is working as it should be, before retiring the old machine. Particular areas of concern are the replication of roles from the current controllers (GP settings for instance) and the ramifications of switching off the machine that has, up until now, been the 'primary'. If there are compelling reasons to use Server 2008 I'm willing to do so, but I don't know if this would cause problems with my existing 2003 machines. Any advice on best practice or previous experiences would be most welcome.

    Read the article

  • Force local IP traffic to an external interface

    - by calandoa
    I have a machine with several interfaces that I can configure as I want, for instance: eth1: 192.168.1.1 eth2: 192.168.2.2 I would like to be able to forward all the traffic to one of these local addresses through the other interface. For instance, all requests to an iperf, ftp, or http server at 192.168.1.1 are not just routed internally, but forwarded through eth2 (and the external network will take care of re-routing the packet to eth1). I tried and looked at several commands, like iptables, ip route, etc... but nothing worked. The closest behavior I could get was done with: ip route change to 192.168.1.1/24 dev eth2 which sends all 192.168.1.x on eth2, except for 192.168.1.1, which is still routed internally. The goal of this setup is to do interface driver testing without using two PCs. I am using Linux, but if you know how to do that with Windows, I'll buy it!
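    If changing the routing table keeps falling back to the internal route, another angle for driver testing is to pin the test traffic to a device at the socket level. A hedged Python sketch using SO_BINDTODEVICE (Linux, needs root; whether the kernel still short-circuits a locally owned destination address varies by kernel version, and on newer kernels putting each NIC in its own network namespace is the more deterministic approach):

        import socket

        # SO_BINDTODEVICE is exposed by CPython on Linux; fall back to the
        # numeric value (25) just in case.
        SO_BINDTODEVICE = getattr(socket, "SO_BINDTODEVICE", 25)

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Pin the socket to eth2 regardless of what the routing table says.
        s.setsockopt(socket.SOL_SOCKET, SO_BINDTODEVICE, b"eth2\0")
        s.connect(("192.168.1.1", 5001))  # e.g. an iperf server on eth1
        s.sendall(b"ping")
        s.close()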

    Read the article

  • Can't telnet to SQL Server

    - by Thiago
    Hi there, I have an SQL Server running on a computer, and I'm trying to access it from another computer on the same local network (potentially VPN, since it's located in a datacenter). The point is that I can't even telnet to the port on which SQL Server is listening. And yes, SQL Server is working, since I can telnet to it from my workstation. I think it's something on the host, since there's no hop between the two computers, but I don't know how to troubleshoot this. Basically, the connection fails when I try to telnet. What can cause such a problem, given that apparently there's no firewall and the server accepts connections from other computers? Thanks in advance
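    The failure mode itself narrows things down: an immediate "connection refused" means the host answered with a reset (nothing listening on that port, or a host firewall rejecting), while a long hang followed by a timeout usually means something is silently dropping packets. A quick probe that distinguishes the two (the address is a placeholder and 1433 is SQL Server's usual default port; substitute your own):

        import socket

        try:
            socket.create_connection(("192.168.1.50", 1433), timeout=5).close()
            print("port open: a listener accepted the connection")
        except ConnectionRefusedError:
            print("refused: host reachable, but no listener / RST from a firewall")
        except socket.timeout:
            print("timeout: something is silently dropping the SYN")

    It is also worth checking on the server itself that the TCP/IP protocol is enabled for SQL Server and that it is bound to the network interface rather than only listening on localhost.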

    Read the article

  • Test ethernet port for data loss

    - by Manoj
    We are trying to test the ethernet PHY on our Linux box for data loss. As of now we just establish a tftp connection to a server to upload and download a file. Whenever a mismatch occurs, it is reported as a failure. This is not a very good test, as any mismatch might have been caused by the network itself and not a PHY problem. Can you suggest a better way to test the ethernet PHY for data loss? Thanks...
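    An off-the-shelf option is iperf in UDP mode, which reports loss and jitter directly; run over a direct cable between two boxes, it takes the rest of the network out of the equation. The same idea fits in a few lines if you want something scriptable: number each datagram and count the gaps on the other side. A minimal sketch (port, packet count, and invocation are arbitrary):

        import socket, sys

        PORT, COUNT = 9999, 10000

        if sys.argv[1] == "send":          # python losstest.py send <receiver-ip>
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            for i in range(COUNT):
                # A sequence number is the whole payload; pace the burst a
                # little if local buffer overruns distort the measurement.
                s.sendto(i.to_bytes(4, "big"), (sys.argv[2], PORT))
        elif sys.argv[1] == "recv":        # python losstest.py recv
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.bind(("", PORT))
            s.settimeout(5)                # stop 5s after the last packet
            seen = set()
            try:
                while True:
                    data, _ = s.recvfrom(4)
                    seen.add(int.from_bytes(data, "big"))
            except socket.timeout:
                pass
            print(f"received {len(seen)}/{COUNT}, lost {COUNT - len(seen)}")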

    Read the article

  • How to forward external port to internal port using plink

    - by user857990
    For a penetration test where I have shell access to a computer running an old version of Windows, I'd like to forward port 4450 to 127.0.0.1:445, because the firewall is blocking 445 externally. I'm stuck on the following: plink -L 4450:127.0.0.1:445 SSH-Server. According to the documentation I've found, I'd have to specify an SSH server, but all the documentation I've found just uses an SSH server on the same network. For forwarding to a localhost port, that wouldn't help. Do I have to install an SSH server on that machine, or are there other ways?
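    One note on the plink semantics: -L opens the listening port on the machine running plink and forwards through the SSH server, while -R (remote forwarding, e.g. plink -R 4450:127.0.0.1:445 your-ssh-server) opens the listener on the SSH server's side instead, so any SSH server you control that the target can reach will do. If running an SSH client out isn't an option but a Python interpreter happens to be available on the box, a tiny userspace relay achieves the same 4450-to-445 mapping locally. A minimal sketch, no error handling:

        import socket, threading

        def pipe(src, dst):
            # Copy bytes one way until the source closes.
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)
            dst.close()

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", 4450))   # the externally reachable port
        server.listen(5)
        while True:
            client, _ = server.accept()
            target = socket.create_connection(("127.0.0.1", 445))
            threading.Thread(target=pipe, args=(client, target), daemon=True).start()
            threading.Thread(target=pipe, args=(target, client), daemon=True).start()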

    Read the article

  • Review: Backbone.js Testing

    - by george_v_reilly
    Title: Backbone.js Testing Author: Ryan Roemer Rating: 4.5 stars Publisher: Packt Copyright: 2013 ISBN: 178216524X Pages: 168 Keywords: programming, testing, javascript, backbone, mocha, chai, sinon Reading period: October 2013 Backbone.js Testing is a short, dense introduction to testing JavaScript applications with three testing libraries, Mocha, Chai, and Sinon.JS. Although the author uses a sample application of a personal note manager written with Backbone.js throughout the book, much of the material would apply to any JavaScript client or server framework. Mocha is a test framework that can be executed in the browser or by Node.js, which runs your tests. Chai is a framework-agnostic TDD/BDD assertion library. Sinon.JS provides standalone test spies, stubs and mocks for JavaScript. They complement each other and the author does a good job of explaining when and how to use each. I've written a lot of tests in Python (unittest and mock, primarily) and C# (NUnit), but my experience with JavaScript unit testing was both limited and years out of date. The JavaScript ecosystem continues to evolve rapidly, with new browser frameworks and Node packages springing up everywhere. JavaScript has some particular challenges in testing—notably, asynchrony and callbacks. Mocha, Chai, and Sinon meet those challenges, though they can't take away all the pain. The author describes how to test Backbone models, views, and collections; dealing with asynchrony; provides useful testing heuristics, including isolating components to reduce dependencies; when to use stubs and mocks and fake servers; and test automation with PhantomJS. He does not, however, teach you Backbone.js itself; for that, you'll need another book. There are a few areas which I thought were dealt with too lightly. There's no real discussion of test-driven development or behavior-driven development, which provide the intellectual foundations of much of the book. Nor does he have much to say about testability and how to make legacy code more testable. The sample Notes app has plenty of testing seams (much of this falls naturally out of the architecture of Backbone); other apps are not so lucky. The chapter on automation is extremely terse—it could be expanded into a very large book!—but it does provide useful indicators to many areas for exploration. I learned a lot from this book and I have no hesitation in recommending it. Disclosure: Thanks to Ryan Roemer and Packt for a review copy of this book.

    Read the article
