Search Results

Search found 4625 results on 185 pages for 'split tunnel'.


  • Spotlight on an office - Moscow

    - by Maria Sandu
    Probably the most famous place in Moscow, after Red Square, is the city centre. Here you can find beautiful buildings that seem to touch the sky, located on the banks of the river. In one of these high towers you can find the Oracle offices, friendly and modern. The stunning view will capture your attention for a couple of minutes, and then you can enjoy a delicious coffee and take a seat at your desk, starting a new day. My name is Dmitry and I can tell you that we're enjoying every minute spent in the office, and that's because of the pleasant atmosphere. As soon as you enter the offices, the friendly environment will make you feel more relaxed. Even though the space is split between the different departments, we interact and communicate a lot. We take our cup of coffee or tea together and discuss our achievements and all sorts of subjects in the kitchen or in the open space. One of my favorite parts is the festive events, when we celebrate with cakes and goodies. Any birthday or new arrival is a good reason for a tea party! We have some work-related traditions that help us as employees. One of them is the monthly Tech Hour, when experts from the Pre-sales team discuss technical topics and the most recent innovations within the company. Lunch is another good opportunity to interact and chat. We have a variety of options, such as the two kitchens or the vast number of restaurants where you can order anything you want. As we are right in the centre of Moscow, you can choose between sushi, Italian pasta and all sorts of food. We usually go to lunch with our colleagues. If you care about your health, I have very good news for you, as nearby there are two first-class fitness centres with swimming pools, yoga and various sport classes that you can attend. My suggestion would be to either start or end your day with a visit to the swimming pool for a well-deserved hour of relaxation. As I mentioned before, we're right in the heart of Moscow, so after work you can spend some time in the large shopping centres, where you can choose between many different entertainment options. We often go bowling or to the cinema. I hope I have given you a glimpse into working life at the Oracle offices in Moscow, a really great and pleasant place to work, so follow us on http://campus.oracle.com for our latest vacancies and internships.

    Read the article

  • Lazy Processing of Streams

    - by Giorgio
    I have the following problem scenario: I have a text file and I have to read it and split it into lines. Some lines might need to be dropped (according to criteria that are not fixed). The lines that are not dropped must be parsed into some predefined records. Records that are not valid must be dropped. Duplicate records may exist and, in such a case, they are consecutive. If duplicate / multiple records exist, only one item should be kept. The remaining records should be grouped according to the value contained in one field; all records belonging to the same group appear one after another (e.g. AAAABBBBCCDEEEFF and so on). The records of each group should be numbered (1, 2, 3, 4, ...). For each group the numbering starts from 1. The records must then be saved somewhere / consumed in the same order as they were produced. I have to implement this in Java or C++. My first idea was to define functions / methods like: one method to get all the lines from the file, one method to filter out the unwanted lines, one method to parse the filtered lines into valid records, one method to remove duplicate records, and one method to group records and number them. The problem is that the data I am going to read can be too big and might not fit into main memory: so I cannot just construct all these lists and apply my functions one after the other. On the other hand, I think I do not need to fit all the data in main memory at once, because once a record has been consumed, all its underlying data (basically the lines of text between the previous record and the current record, and the record itself) can be disposed of. With the little knowledge I have of Haskell, I immediately thought of some kind of lazy evaluation, in which instead of applying functions to lists that have been completely computed, I have different streams of data that are built on top of each other and, at each moment, only the needed portion of each stream is materialized in main memory. But I have to implement this in Java or C++. So my question is: which design pattern or other technique would allow me to implement this lazy processing of streams in one of these languages?
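
    One common way to get this effect in Java is the classic Iterator-decorator (pipes-and-filters) arrangement: every stage wraps the previous Iterator and pulls items only on demand, so only a handful of lines or records are ever in memory at once. A minimal sketch, with the file name and the keep/drop condition as placeholder assumptions:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.Iterator;
        import java.util.NoSuchElementException;
        import java.util.function.Predicate;

        // Sketch: each stage is an Iterator that wraps the previous one and only pulls
        // as much input as the caller demands, so the whole file is never in memory.
        public class LazyPipelineSketch {

            // Generic filtering stage; analogous decorators would parse lines into
            // records, skip consecutive duplicates, or attach per-group numbers.
            static final class FilterIterator<T> implements Iterator<T> {
                private final Iterator<T> source;
                private final Predicate<T> keep;
                private T buffered;
                private boolean hasBuffered;

                FilterIterator(Iterator<T> source, Predicate<T> keep) {
                    this.source = source;
                    this.keep = keep;
                }

                @Override
                public boolean hasNext() {
                    while (!hasBuffered && source.hasNext()) {
                        T candidate = source.next();
                        if (keep.test(candidate)) {
                            buffered = candidate;
                            hasBuffered = true;
                        }
                    }
                    return hasBuffered;
                }

                @Override
                public T next() {
                    if (!hasNext()) {
                        throw new NoSuchElementException();
                    }
                    hasBuffered = false;
                    return buffered;
                }
            }

            public static void main(String[] args) throws IOException {
                // "input.txt" and the blank-line filter below are placeholders.
                try (BufferedReader reader = Files.newBufferedReader(Paths.get("input.txt"))) {
                    Iterator<String> lines = reader.lines().iterator();      // lazy line source
                    Iterator<String> kept =
                            new FilterIterator<>(lines, line -> !line.trim().isEmpty());
                    while (kept.hasNext()) {
                        System.out.println(kept.next());                     // consume in order
                    }
                }
            }
        }

    The parsing, consecutive de-duplication and group-numbering stages can be written as further decorators in the same style. C++ input iterators (or ranges) give the same shape, and Java 8 Streams cover the stateless stages (filter/map), though the stateful ones are usually easier as hand-written decorators like the one above.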

    Read the article

  • From 20,663 issues to 1 issue – style-copping C5.Tests

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/05/28/from-20663-issues-to-1-issuendashstyle-copping-c5.tests.aspx
    I recently became interested in the potential of the C5 Collections solution from http://www.itu.dk/research/c5/, however I was dismayed at the state of the code in the unit test project, so I set about fixing the 20,663 issues detected by StyleCop. The tools I used were the latest versions of:
        My 64-bit development PC running Windows 8 Update with 8 GB RAM
        Visual Studio 2013 Ultimate with SP2
        ReSharper
        GhostDoc Pro
    My first attempt had to be abandoned due to a collision of class names which broke one of the unit tests. So, being aware of this duplication of class names, I started again and planned to prepend the class names with the namespace name. In some cases I additionally prepended the item of the C5 collection that was being tested. So what was the condition of the code at the start? Besides the sprawl of C# code not written to StyleCop standard, there was:
        1) Placing of many classes within one physical file.
        2) Namespaces within namespaces that did not follow the project structure.
        3) As already mentioned, duplication of class names across namespaces.
        4) A copyright notice that sprawled but had to be preserved.
        5) Project sub-folders that were all lower case instead of initial-letter capitalised.
    The first step was to add a StyleCop header, plus the original header contained within a region, to every file. The next step was to run GhostDoc Pro using its "Document File" option on every file, but not letting it replace the headers I had added. This brought the number of issues down to 18,192. I then went through each file, collapsing each class and prepending names as appropriate. At each step, I saved the changes to my local Git. The next step was to move each class to its own file and to style-cop each file. ReSharper provides a very useful feature for doing this which also fixes missing "this." and moves using statements inside the namespace. Some classes required minimal work whereas others required extensive work to reach the StyleCop standard. The unit tests were run at each split and when each class was completed. When all was done, one issue remained, which I will need to submit to the StyleCop team for their advice (and possibly a fix to StyleCop). The updated solution has been made available at https://c5stylecopped.codeplex.com/releases/view/122785.

    Read the article

  • Agile team with no dedicated Tester members. Insane or efficient?

    - by MetaFight
    I'm a software developer. I've been thinking a lot about the efficiency of the Software Testers I've worked with so far in my career. In fact, I've been thinking a lot about the Software Tester role in general and have reached a potentially contentious conclusion: non-developer software testing staff are less efficient at software testing than developers. Now, before everyone gets upset, hear me out. This isn't mere opinion: Software Testing and Software Development both require a lot of skills in common:
        Problem solving
        Thinking about corner cases
        Analytical skills
        The ability to define clear and concise step-by-step scenarios
    What developers have in addition to this is the ability to automate their tests. Yes, I know non-dev testers can automate their tests too, but that often then becomes a test maintenance issue. Because automating UI tests is essentially programming, non-dev members encounter all the same difficulties software developers encounter: copy-pasta, lack of code reusability/maintainability, etc. So, I was wondering: why not replace all non-dev roles with developer roles? Developers have the skills required to perform Software Testing tasks, and they have the skills to automate tests and keep them maintainable. Would the following work? Hire a bunch of developers and split them into 2 roles:
        Software developers
        Software developers doing testing (some manual, mostly automated by writing integration tests, unit tests, etc.)
        Software developers doing application support. (I've removed this as it is probably a separate question altogether)
    And, in our case, since we're doing Agile development, rotate the roles every sprint or two. Also, if at all possible, try to have people spend their Developer stints and Testing stints on different projects. Ideally you would want to reduce the turnover rate per rotation. So maybe you could have 2 groups and make sure the rotation cycles of the groups are staggered. So, for example, if each rotation was two sprints long, the two groups would have their rotations 1 sprint apart. That way there's only a 50% turnover rate per sprint. Am I crazy, or could this work? (Obviously a key component to this working is that all devs want to be in the 3 roles. Let's assume I'm starting a new company and I can hire these ideal people.) Edit: I've removed the phrase "QA", as apparently we are using it incorrectly where I work.

    Read the article

  • Dell VRTX - slow cluster shared storage

    - by NorbyTheGeek
    I have a brand new Dell VRTX box set up as a Failover Cluster running HA Hyper-V virtual machines. This is my first time setting up clustering, and my first time with one of these boxes, so I'm sure I've missed something. The virtual machines are experiencing high disk latency and bad performance when accessing their VHD(x) files located on a Cluster Shared Volume. The VRTX has 10 x 900 GB 10K SAS drives in a RAID 6 configuration, and the VRTX has the redundant Shared PERC 8 controllers. Both blades have full access to the virtual disks. There are two M520 blades installed, each with 128 GB RAM. MPIO is configured for the PERC 8 controllers. The operating system on the blades is Server 2012 (NOT R2). The RAID 6 array is split into a small (8 GB) volume for the cluster quorum witness and a large (6.5 TB) volume for a Cluster Shared Volume (mounted on the nodes as C:\ClusterStorage\Volume1). An example of slow disk access: logging into a Server 2012 VM and having Server Manager come up automatically. Disk access goes to 100%, with write speeds of 20 MB/s or so, read speeds of 500 KB/s or so, and an Average Response Time of over 1000 ms, sometimes spiking to 4000-5000 ms or so. It's the latency that really worries me. Is there something specific I should look at in my configuration? It doesn't seem to matter whether I use VHD or VHDX, dynamic or static.

    Read the article

  • How do you install/configure JBoss on Linux/Unix?

    - by mafro
    I'm currently working out how to install and configure multiple (30+) JBoss EAP 5 configurations (both standalone and clusters) for development, test and production at a client's site (running SuSE). I'm not too fond of the JBoss way of storing application/configuration together with system files, so I have tried to split things up (i.e. moving server config out of the JBoss installation directory). I would also like to minimize the amount of configuration needed when upgrading/patching JBoss - but I'm not done thinking about that... It would be great to hear how you've done it and what you think about my approach. This is how my installations look (for the moment):
    Standard JBoss EAP install (minus server configs):
        /opt/jboss/jboss-eap-5.0/jboss-as
        /opt/jboss/jboss-eap-5.0/jboss-as/bin/
        /opt/jboss/jboss-eap-5.0/jboss-as/lib/
        /opt/jboss/jboss-eap-5.0/jboss-as/server/   [server configs removed to avoid starting them by mistake]
        /opt/jboss/jboss-eap-5.0/jboss-as/.../
    Application (some JBoss folders have been omitted - you'll get the point anyway):
        /app/<project>/                        [$app.dir - application specific base folder]
        /app/<project>/jboss/                  [$jboss.home]
        /app/<project>/jboss/bin/ -> /opt/jboss/jboss-eap-5.0/jboss-as/bin
        /app/<project>/jboss/lib/ -> /opt/jboss/jboss-eap-5.0/jboss-as/lib
        /app/<project>/jboss/server/<cfg>/     [project specific config based on 'production']
        /app/<project>/jboss/server/<cfg>/log/ -> /log/<project>/<cfg>
        /app/<project>/jboss/server/<cfg>/...
        /app/<project>/jboss/.../ -> /opt/jboss/jboss-eap-5.0/jboss-as/.../
        /app/<project>/bin/                    [application specific scripts for start/stop etc - wraps jboss supplied scripts]
        /app/<project>/deploy/                 [application deploy folder]
        /app/<project>/etc/                    [application specific config]
    Questions:
        How do you install JBoss (on Linux/Unix systems)?
        Where do you put JBoss and what modifications do you do?
        Where do you put your applications and application specific files?
        Do you share JBoss instances between applications or run one instance/cluster per application?
        How do you manage configuration changes (i.e. your modifications of the JBoss standard config)?

    Read the article

  • Office365 SPF record has too many lookups

    - by Sammitch
    For some utterly ridiculous administrative reasons we've got a split domain with one mailbox on Office365, which requires us to add include:outlook.com to our SPF record. The problem with this is that that rule alone requires nine DNS lookups of the maximum of 10. Seriously, it's horrible. Just look at it:
        v=spf1 include:spf-a.outlook.com include:spf-b.outlook.com ip4:157.55.9.128/25 include:spfa.bigfish.com include:spfb.bigfish.com include:spfc.bigfish.com include:spf-a.hotmail.com include:_spf-ssg-b.microsoft.com include:_spf-ssg-c.microsoft.com ~all
    Given that we have our own large-ish mail system, we need to have rules for a, mx, include:_spf1.mydomain.com, and include:_spf2.mydomain.com, which puts us at 13 DNS lookups. That causes PERMERRORs with strict SPF validators, and completely unreliable/unpredictable validation with non-strict/badly implemented validators. Is it possible to somehow eliminate 3 of those include: rules from the bloated outlook.com record, but still cover the servers used by O365?
    Edit: Commenters have mentioned that we should simply use the shorter spf.protection.outlook.com record. While that is news to me, and it is shorter, it's only one record shorter:
        spf.protection.outlook.com include:spf-a.outlook.com include:spf-b.outlook.com include:spf-c.outlook.com include:spf.messaging.microsoft.com include:spfa.frontbridge.com include:spfb.frontbridge.com include:spfc.frontbridge.com
    Edit²: I suppose we can technically pare this down to:
        v=spf1 a mx include:_spf1.mydomain.com include:_spf2.mydomain.com include:spf-a.outlook.com include:spf-b.outlook.com include:spf-c.outlook.com include:spfa.frontbridge.com include:spfb.frontbridge.com include:spfc.frontbridge.com ~all
    but the potential issues I see with this are:
        We need to keep abreast of any changes to the parent spf.protection.outlook.com and spf.messaging.microsoft.com records. If anything is changed or [god forbid] added, we would have to manually update ours to reflect that.
        With our actual domain name the record's length is 260 chars, which would require 2 strings for the TXT record, and I honestly don't trust that all of the DNS clients and SPF resolvers out there will properly accept a TXT record longer than 255 bytes.
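
    For reference, the 10-lookup budget counts every mechanism that forces a DNS query (include, a, mx, ptr, exists, plus the redirect modifier), and terms inside included records count as well. A rough flat-count sketch over a hypothetical record string (it ignores that recursion, so treat it only as an illustration of how the tally works):

        import java.util.Arrays;
        import java.util.List;

        // Rough illustration of how the RFC 7208 "10 DNS lookup" budget is tallied:
        // include:, redirect=, a, mx, ptr and exists: each cost one lookup, and
        // included records add their own terms on top (not followed here).
        public class SpfLookupCount {
            public static void main(String[] args) {
                // Hypothetical record, for illustration only.
                String record = "v=spf1 a mx include:_spf1.mydomain.com include:_spf2.mydomain.com "
                        + "include:spf.protection.outlook.com ~all";
                List<String> costlyPrefixes = Arrays.asList("include:", "redirect=", "exists:", "ptr");
                int lookups = 0;
                for (String term : record.split("\\s+")) {
                    String t = term.toLowerCase();
                    if (t.equals("a") || t.equals("mx") || t.startsWith("a:") || t.startsWith("mx:")
                            || costlyPrefixes.stream().anyMatch(t::startsWith)) {
                        lookups++;
                    }
                }
                System.out.println("lookup-consuming terms at the top level: " + lookups);
            }
        }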

    Read the article

  • How to set up the CNAME in DNS zone record to work with Unbounce

    - by Lirik
    I'm trying to run split testing on some landing pages I "designed" with Unbounce, but it requires that I set the CNAME record for my domain/sub-domain, and I'm having trouble figuring out the right way to do it. My host is Arvixe (www.arvixe.com) and their customer support has failed to help me for the past 5 days (I've spoken to them multiple times). I followed the directions for setting the CNAME record and was able to set it, but I'm consistently unable to verify that it is set up correctly. I followed the instructions on Unbounce to verify the CNAME record for my sub-domain (beta.devboost.com) and here are the results:
        No records found
        Reported by ns1.SNARE.arvixe.com on Thursday, November 10, 2011 at 5:49:57 PM (GMT-6)
    Here is my DNS zone record from the control panel of my host (last record, CNAME unbouncepages.com): Is there something wrong with my DNS zone record? What's the right way to do this? Update: I also have a CNAME record for beta in my root domain (devboost.com): I've updated my sub-domain record now: I've removed most of the other DNS records and I've removed the beta label for the CNAME record: Is that correct? Is there anything else I need to do?

    Read the article

  • How to shrink Windows 7 boot partition with unmovable files.

    - by Alex Che
    I have just bought an HP laptop with Windows 7 (64-bit). It has a 500 GB HDD with three partitions: a small hidden system partition, a 12 GiB HP recovery partition, and a 450 GiB C: boot partition. I would like to split this large C: partition into two, leaving only 100 GiB for the system and giving the rest to a new data partition. Although Windows' built-in Disk Management utility has an option to shrink the boot partition, it only allows me to shrink it by roughly half, even though only 20 GiB of the partition is used. As far as I understand, unmovable system files lie in the middle of the partition, preventing the Disk Management utility from doing what I want. And since new HP laptops don't come with OS installation disks (they only let you create recovery disks yourself), I can't just repartition the HDD and then reinstall the OS. So, is there any way to shrink the C: boot partition and keep Windows 7 working? P.S.: I tried the third-party GParted utility, and after shrinking the partition Windows 7 stopped booting with a BSOD. System recovery didn't work, and I had to do a factory recovery. Since this is a long process, I would like to avoid doing it again :) So please suggest only proven solutions.

    Read the article

  • How to get robocopy running in powershell?

    - by Mark Allison
    Hi, I'm trying to use robocopy inside powershell to mirror some directories on my home machines. Here's my script:
        param ($configFile)
        $config = Import-Csv $configFile
        $what = "/COPYALL /B /SEC/ /MIR"
        $options = "/R:0 /W:0 /NFL /NDL"
        $logDir = "C:\Backup\"
        foreach ($line in $config) {
            $source = $($line.SourceFolder)
            $dest = $($line.DestFolder)
            $logfile = $logDIr
            $logfile += Split-Path $dest -Leaf
            $logfile += ".log"
            robocopy "$source $dest $what $options /LOG:MyLogfile.txt"
        }
    The script takes in a csv file with a list of source and destination directories. When I run the script I get these errors:
        -------------------------------------------------------------------------------
          ROBOCOPY :: Robust File Copy for Windows
        -------------------------------------------------------------------------------
          Started : Sat Apr 03 21:26:57 2010
           Source : P:\ C:\Backup\Photos \COPYALL \B \SEC\ \MIR \R:0 \W:0 \NFL \NDL \LOG:MyLogfile.txt\
             Dest -
            Files : *.*
          Options : *.* /COPY:DAT /R:1000000 /W:30
        ------------------------------------------------------------------------------
        ERROR : No Destination Directory Specified.
          Simple Usage :: ROBOCOPY source destination /MIR
                 source :: Source Directory (drive:\path or \\server\share\path).
            destination :: Destination Dir (drive:\path or \\server\share\path).
                   /MIR :: Mirror a complete directory tree.
          For more usage information run ROBOCOPY /?
        ****  /MIR can DELETE files as well as copy them !
    Any idea what I need to do to fix? Thanks, Mark.

    Read the article

  • Useful Excel keyboard shortcuts

    - by Ben Lings
    What keyboard shortcuts do you use in Excel? Things I've discovered recently and found very useful are:
        Shift + Space - select the current row
        Ctrl + Space - select the current column
        Ctrl + Shift + Space - select the block of contiguous cells
        Ctrl + + - Insert (as in the context menu). If the current row is selected, will insert a new row.
        Ctrl + - - Delete (as in the context menu). If the current row is selected, will delete the entire row.
    What (apart from the normal cut, copy, paste, etc) do you use?
        Ctrl + 1 to open the Format dialog.
        Shift + F2 to add/edit a cell comment.
        Shift + F2, followed by Esc to select the current cell comment, which can then be moved around with the arrow keys or deleted by pressing Del.
        Ctrl + an arrow key to move to the last non-blank cell in a series. This is usually the edge of a table, but not if you have blank cells in the path. Pressing End followed by an arrow key does the same thing.
        Alt + F11 to open the VBA editor.
        Alt + = to start a SUM() formula and go straight to selecting cells to be summed.
        Ctrl + G or F5 to jump to a cell by typing its coordinates (e.g. C3)
        Ctrl + Home to jump to the top left, usually A1 unless you are in a frozen split view, in which case it will jump to the top left of the "data" area.
        Ctrl + ; and Ctrl + Shift + ; to insert the current date and time, respectively.
    I know Ben Lings already posted this one, but I find it indispensable.

    Read the article

  • Locating Rogue Perl Script

    - by Gary Garside
    I've been trying to source the location of a perl script which is causing havoc on a server which I control. I'm also trying to find out exactly how this script was installed on the server - my best guess is through a WordPress exploit. The server is a basic web setup running Ubuntu 9.04, Apache and MySQL. I use IPTables for the firewall, the server runs around 20 sites and the load never really creeps above 0.7. From what I can see, the script is making outbound connections to other servers (most likely trying to brute-force entry). Here is a top dump of one of the processes:
          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        22569 www-data  20   0 22784 3216  780 R  100  0.2  47:00.60 perl
    The command the process is running is /usr/sbin/sshd. I've tried to find an exact file name but I'm having no luck... I've run lsof -p PID and here is the output:
        COMMAND   PID     USER   FD   TYPE DEVICE    SIZE   NODE NAME
        perl    22569 www-data  cwd    DIR    8,6    4096      2 /
        perl    22569 www-data  rtd    DIR    8,6    4096      2 /
        perl    22569 www-data  txt    REG    8,6   10336 162220 /usr/bin/perl
        perl    22569 www-data  mem    REG    8,6   26936 170219 /usr/lib/perl/5.10.0/auto/Socket/Socket.so
        perl    22569 www-data  mem    REG    8,6   22808 170214 /usr/lib/perl/5.10.0/auto/IO/IO.so
        perl    22569 www-data  mem    REG    8,6   39112 145112 /lib/libcrypt-2.9.so
        perl    22569 www-data  mem    REG    8,6 1502512 145124 /lib/libc-2.9.so
        perl    22569 www-data  mem    REG    8,6  130151 145113 /lib/libpthread-2.9.so
        perl    22569 www-data  mem    REG    8,6  542928 145122 /lib/libm-2.9.so
        perl    22569 www-data  mem    REG    8,6   14608 145125 /lib/libdl-2.9.so
        perl    22569 www-data  mem    REG    8,6 1503704 162222 /usr/lib/libperl.so.5.10.0
        perl    22569 www-data  mem    REG    8,6  135680 145116 /lib/ld-2.9.so
        perl    22569 www-data   0r   FIFO    0,6         157216 pipe
        perl    22569 www-data   1w   FIFO    0,6         197642 pipe
        perl    22569 www-data   2w   FIFO    0,6         197642 pipe
        perl    22569 www-data   3w   FIFO    0,6         197642 pipe
        perl    22569 www-data   4u   IPv4         383991        TCP outsidesoftware.com:56869->server12.34.56.78.live-servers.net:www (ESTABLISHED)
    My gut feeling is that outsidesoftware.com is also under attack, or possibly being used as a tunnel. I've managed to find a number of rogue files in /tmp and /var/tmp; here is a brief excerpt from one of these files:
        #!/usr/bin/perl
        # this spreader is coded by xdh
        # xdh@xxxxxxxxxxx
        # only for testing...
        my @nickname = ("vn");
        my $nick = $nickname[rand scalar @nickname];
        my $ircname = $nickname[rand scalar @nickname];
        #system("kill -9 `ps ax |grep httpdse |grep -v grep|awk '{print $1;}'`");
        my $processo = '/usr/sbin/sshd';
    The full file contents can be viewed here: http://pastebin.com/yenFRrGP
    I'm trying to achieve a couple of things here... Firstly, I need to stop these processes from running, whether by disabling outbound SSH, adding IPTables rules, or anything else. These scripts have been running for around 36 hours now and my main concern is to stop these things running and respawning by themselves. Secondly, I need to try and source where and how these scripts have been installed. If anybody has any advice on what to look for in access logs or anything else, I would be really grateful. Thanks in advance.

    Read the article

  • Cisco VPN Client dropping connection

    - by IT Team
    Using Windows XP and Cisco VPN client version 5.0.4.xxx to connect to a remote customer site. We are able to establish the connection and start an RDP session, but within 1-2 minutes the connection drops and the VPN connection disconnects. The PC making the connection is on a DMZ which is NATed to a public IP address. If we move the PC directly onto the internet without being on the DMZ, the connection works and we don't encounter any disconnects. We use a PIX 515E running 7.2.4 and don't have any problems with similar setups connecting to other customer sites from the DMZ. The VPN setup on the client side is pretty basic, using IPSec over TCP port 10000. Not sure what device they are using on the peer, but my guess would be an ASA. Any idea as to what the problem would be? Below are the logs from the VPN client when the problem occurs. The real IP address has been changed to RemotePeerIP.
        4   14:39:30.593  09/23/09  Sev=Info/4     CM/0x63100024    Attempt connection with server "RemotePeerIP"
        5   14:39:30.593  09/23/09  Sev=Info/6     CM/0x6310002F    Allocated local TCP port 1942 for TCP connection.
        6   14:39:30.796  09/23/09  Sev=Info/4     IPSEC/0x63700008 IPSec driver successfully started
        7   14:39:30.796  09/23/09  Sev=Info/4     IPSEC/0x63700014 Deleted all keys
        8   14:39:30.796  09/23/09  Sev=Info/6     IPSEC/0x6370002C Sent 256 packets, 0 were fragmented.
        9   14:39:30.796  09/23/09  Sev=Info/6     IPSEC/0x63700020 TCP SYN sent to RemotePeerIP, src port 1942, dst port 10000
        10  14:39:30.796  09/23/09  Sev=Info/6     IPSEC/0x6370001C TCP SYN-ACK received from RemotePeerIP, src port 10000, dst port 1942
        11  14:39:30.796  09/23/09  Sev=Info/6     IPSEC/0x63700021 TCP ACK sent to RemotePeerIP, src port 1942, dst port 10000
        12  14:39:30.796  09/23/09  Sev=Warning/3  IPSEC/0xA370001C Bad cTCP trailer, Rsvd 26984, Magic# 63697672h, trailer len 101, MajorVer 13, MinorVer 10
        13  14:39:30.796  09/23/09  Sev=Info/4     CM/0x63100029    TCP connection established on port 10000 with server "RemotePeerIP"
        14  14:39:31.296  09/23/09  Sev=Info/4     CM/0x63100024    Attempt connection with server "RemotePeerIP"
        15  14:39:31.296  09/23/09  Sev=Info/6     IKE/0x6300003B   Attempting to establish a connection with RemotePeerIP.
        16  14:39:31.296  09/23/09  Sev=Info/4     IKE/0x63000013   SENDING ISAKMP OAK AG (SA, KE, NON, ID, VID(Xauth), VID(dpd), VID(Frag), VID(Unity)) to RemotePeerIP
        17  14:39:36.296  09/23/09  Sev=Info/4     IKE/0x63000021   Retransmitting last packet!
        18  14:39:36.296  09/23/09  Sev=Info/4     IKE/0x63000013   SENDING ISAKMP OAK AG (Retransmission) to RemotePeerIP
        19  14:39:41.296  09/23/09  Sev=Info/4     IKE/0x63000021   Retransmitting last packet!
        20  14:39:41.296  09/23/09  Sev=Info/4     IKE/0x63000013   SENDING ISAKMP OAK AG (Retransmission) to RemotePeerIP
        21  14:39:46.296  09/23/09  Sev=Info/4     IKE/0x63000021   Retransmitting last packet!
        22  14:39:46.296  09/23/09  Sev=Info/4     IKE/0x63000013   SENDING ISAKMP OAK AG (Retransmission) to RemotePeerIP
        23  14:39:51.328  09/23/09  Sev=Info/4     IKE/0x63000017   Marking IKE SA for deletion (I_Cookie=AEFC3FFF0405BBD6 R_Cookie=0000000000000000) reason = DEL_REASON_PEER_NOT_RESPONDING
        24  14:39:51.828  09/23/09  Sev=Info/4     IKE/0x6300004B   Discarding IKE SA negotiation (I_Cookie=AEFC3FFF0405BBD6 R_Cookie=0000000000000000) reason = DEL_REASON_PEER_NOT_RESPONDING
        25  14:39:51.828  09/23/09  Sev=Info/4     CM/0x63100014    Unable to establish Phase 1 SA with server "RemotePeerIP" because of "DEL_REASON_PEER_NOT_RESPONDING"
        26  14:39:51.828  09/23/09  Sev=Info/5     CM/0x63100025    Initializing CVPNDrv
        27  14:39:51.828  09/23/09  Sev=Info/4     CM/0x6310002D    Resetting TCP connection on port 10000
        28  14:39:51.828  09/23/09  Sev=Info/6     CM/0x63100030    Removed local TCP port 1942 for TCP connection.
        29  14:39:51.828  09/23/09  Sev=Info/6     CM/0x63100046    Set tunnel established flag in registry to 0.
        30  14:39:51.828  09/23/09  Sev=Info/4     IKE/0x63000001   IKE received signal to terminate VPN connection
        31  14:39:52.328  09/23/09  Sev=Info/6     IPSEC/0x63700023 TCP RST sent to RemotePeerIP, src port 1942, dst port 10000
        32  14:39:52.328  09/23/09  Sev=Info/4     IPSEC/0x63700014 Deleted all keys
        33  14:39:52.328  09/23/09  Sev=Info/4     IPSEC/0x63700014 Deleted all keys
        34  14:39:52.328  09/23/09  Sev=Info/4     IPSEC/0x63700014 Deleted all keys
        35  14:39:52.328  09/23/09  Sev=Info/4     IPSEC/0x6370000A IPSec driver successfully stopped
    Thank you for any help you can provide.

    Read the article

  • Wacom consumer tablet driver service may crash while opening Bamboo Preferences, often after resuming computer from sleep

    - by DragonLord
    One of the ExpressKeys on my Wacom Bamboo Capture graphics tablet is mapped to Bamboo Preferences, so that I can quickly access the tablet settings and view the battery level (I have the Wireless Accessory Kit installed). However, when I connect the tablet to the computer, in wired or wireless mode, and attempt to open Bamboo Preferences, the Wacom consumer tablet driver service may crash, most often when I try to do so after resuming the computer from sleep. There is usually no direct indication of the crash (although I once did get "Tablet Service for consumer driver stopped working and was closed"), only that the cursor shows that the system is busy for a split second. When this happens, the pen no longer tracks on the screen when in proximity of the tablet (even though it is detected by the tablet itself); however, touch continues to function correctly. To recover from this condition, I need to restart the tablet driver services. I got tired of having to go through Task Manager to restart the service every time this happens, so I ended up writing the following command script, with a shortcut on the desktop for running it with elevated privileges:
        net stop TabletServicePen
        net start TabletServicePen
        net stop TouchServicePen
        net start TouchServicePen
    Is there something I can do to prevent these crashes from happening in the first place, or do I have to deal with this issue until the driver is updated? Windows 7 Home Premium 64-bit. Tablet drivers are up to date.
    Technical details: Action Center gives the following details about the crash in Reliability Monitor:
        Source: Tablet Service for consumer driver
        Summary: Stopped working
        Date: 10/15/2012 2:48 PM
        Status: Report sent
        Description: Faulting Application Path: C:\Program Files\Tablet\Pen\Pen_Tablet.exe
        Problem signature
        Problem Event Name: APPCRASH
        Application Name: Pen_Tablet.exe
        Application Version: 5.2.5.5
        Application Timestamp: 4e694ecd
        Fault Module Name: Pen_Tablet.exe
        Fault Module Version: 5.2.5.5
        Fault Module Timestamp: 4e694ecd
        Exception Code: c0000005
        Exception Offset: 00000000002f6cde
        OS Version: 6.1.7601.2.1.0.768.3
        Locale ID: 1033
        Additional Information 1: 9d4f
        Additional Information 2: 9d4f1c8d2c16a5d47e28521ff719cfba
        Additional Information 3: 375e
        Additional Information 4: 375ebb9963823eb7e450696f2abb66cc
        Extra information about the problem
        Bucket ID: 45598085
    Exception code 0xC0000005 means STATUS_ACCESS_VIOLATION. The event log contains essentially the same information.

    Read the article

  • Windows 7 - Windows Update won't update

    - by StickFigs
    I'm trying to update my Windows 7 Professional 32-bit edition, and when I tell Windows Update to scan for updates it fails with error code 0x80096001. I checked out WindowsUpdate.log and it appears this is the problem:
        Validating signature for C:\Windows\SoftwareDistribution\WuRedir\9482F4B4-E343-43B6-B170-9A65BC822C77\muv4wuredir.cab:
        WARNING: Error: 0x80096001 when verifying trust for C:\Windows\SoftwareDistribution\WuRedir\9482F4B4-E343-43B6-B170-9A65BC822C77\muv4wuredir.cab
        WARNING: Digital Signatures on file C:\Windows\SoftwareDistribution\WuRedir\9482F4B4-E343-43B6-B170-9A65BC822C77\muv4wuredir.cab are not trusted: Error 0x80096001
    How can I go about fixing this? It looks like it's just this one (corrupted?) file that's causing the problem. Thanks!
    UPDATE: Upon inspecting the file mentioned in the error message, it appears that the file does not exist! What does this mean and how do I get it back?
    UPDATE 2: OK, it appears that the file in question appears only for a split second while Windows Update is trying (but failing) to search for updates. So I guess the problem doesn't have to do with the file specifically then.

    Read the article

  • What device can create wireless network while connecting to an ethernet router

    - by Nicolo
    Hi, I have access to an ethernet port of a wireless router. I simply connect my laptop to it via an ethernet cable. There are a total of four such ports on the wireless router. Now I want to connect a device (a wireless access point? wireless bridge? wireless switch?) via an ethernet cable to one of the other ethernet ports of the router. I want this device to act as a kind of wireless switch - it should "split" the ethernet connection coming from the router to two or more computers that connect to this device wirelessly. Basically, I have a wireless router with its wireless function switched off. I don't know the password for that router, so I can't activate the wireless function. I don't know the password of the ISP either. The only thing I can do is connect via ethernet cable to the wireless router, and this does not require a password. Now I want to use that connection and build a wireless network upon it. What kind of device do I need? I am not really very well informed about network management and find the descriptions "wireless access point", "wireless bridge", "wireless switch" confusing. I know what an ethernet switch is - what I need is a device which would do the same, but allow the clients to connect to it wirelessly. What kind of device would do that? Any recommendations for specific products?

    Read the article

  • Chunking large rsync transfers?

    - by Gabe Martin-Dempesy
    We use rsync to update a mirror of our primary file server to an off-site colocated backup server. One of the issues we currently have is that our file server has 1 TB of mostly smaller files (in the 10-100 KB range), and when we're transferring this much data, we often end up with the connection being dropped several hours into the transfer. Rsync doesn't have a resume/retry feature that simply reconnects to the server to pick up where it left off -- you need to go through the file comparison process, which ends up being very lengthy with the number of files we have. The recommended way to get around this is to split up your large rsync transfer into a series of smaller transfers. I've figured the best way to do this is by the first letter of the top-level directory names, which doesn't give us a perfectly even distribution, but is good enough. I'd like to confirm whether my methodology for doing this is sane, or if there's a simpler way to accomplish the goal. To do this, I iterate through A-Z, a-z, 0-9 to pick a one-character $prefix. Initially I was thinking of just running
        rsync -av --delete --delete-excluded --exclude "*.mp3" "src/$prefix*" dest/
    (--exclude "*.mp3" is just an example, as we have a more lengthy exclude list for removing things like temporary files). The problem with this is that any top-level directories in dest/ that are no longer present on src will not get picked up by --delete. To get around this, I'm instead trying the following:
        rsync \
          --filter 'S /$prefix*' \
          --filter 'R /$prefix*' \
          --filter 'H /*' \
          --filter 'P /*' \
          -av --delete --delete-excluded --exclude "*.mp3" src/ dest/
    I'm using the show and hide over include and exclude, because otherwise the --delete-excluded will delete anything that doesn't match $prefix. Is this the most effective way of splitting the rsync into smaller chunks? Is there a more effective tool, or a flag that I've missed, that might make this simpler?

    Read the article

  • Excel VBA: select every other cell in a row range to be copied and pasted vertically

    - by terry alexander
    I have a 2200+ page text file. It is delivered from a customer through a data exchange to us with asterisks to separate values and tildes (~) to denote the end of a row. The file is sent to me as a text file in Word. Most rows are split in two (1 row covers a full line and part of a second line). I transfer segments (10-page chunks) of it at a time into Excel where, unfortunately, any zeroes that occur at the end of a row get discarded in the "text to columns" procedure. So, I eyeball every "long" row to ensure that zeroes were not lost and manually re-enter any that were. Here is a small bit of sample data:
        SDQ EA 92 1551 378 1601 151 1603 157 1604 83
    The "SDQ, EA, and 92" are irrelevant (artifacts of data transmission). I want to use Excel VBA to select 1551, 1601, 1603, and 1604 (these are store numbers) so that I can copy those values and transpose paste them vertically. I will then go back and copy 378, 151, 157, and 83 (sales values) so that I can transpose paste them next to the store numbers. The next two rows of data contain the same store numbers but give the corresponding dollar values. I will only need to copy the dollar values so they can be transpose pasted vertically next to the unit values (e.g. 378, 151, 157, and 83). Just being able to put my cursor on the first cell of interest in the row and run a macro to copy every other cell would speed up my work tremendously. I have tried using ActiveCell and Offset references to select a range to copy but have not been successful. Does anyone have any suggestions for me? Thanks in advance for the help.
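
    Not the requested VBA, but the reshaping itself (every other cell is a store number, and the cell after it is the matching figure) is easy to see in a few lines; a hypothetical sketch in Java over one row of the sample data above:

        import java.util.Arrays;
        import java.util.List;

        // Hypothetical sketch: pair up "every other cell" in a row like the sample above,
        // skipping the first three irrelevant tokens (SDQ, EA, 92).
        public class EveryOtherCell {
            public static void main(String[] args) {
                List<String> row = Arrays.asList(
                        "SDQ", "EA", "92", "1551", "378", "1601", "151", "1603", "157", "1604", "83");
                for (int i = 3; i + 1 < row.size(); i += 2) {
                    // row.get(i) is a store number, row.get(i + 1) the value that follows it
                    System.out.println(row.get(i) + "\t" + row.get(i + 1));
                }
            }
        }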

    Read the article

  • ADSL throughput loss from Reed-Solomon encoding

    - by javano
    I'm reading about ADSL starting here and I am confused by how the Reed-Solomon encoding for ECC limits the available transfer rate as much as it does (nearly half). This pdf on the same subject contains the following:
        A maximum of 255 sub-carriers can be used to modulate data in the downstream direction. Sub-carrier 256, the downstream Nyquist frequency, and sub-carrier 64, the downstream pilot frequency, are not available for user data, thus limiting the total number of available downstream sub-carriers to 254. Each of these 254 sub-carriers can support the modulation of 0 to 15 bits. Since the ADSL DMT data frame rate is 4000 frames per second, the maximum theoretical downstream data rate of an ADSL system is 15.24Mbps. Due to limitations in system architecture, specifically the maximum allowable Reed-Solomon codeword size (255 bytes), the maximum achievable downstream data rate is 8.16Mbps.
    How is this nearly halving the throughput? Is all that extra bandwidth the overhead of the RS encoding? 15240000 bps (15.24 Mbps) - 8160000 bps (8.16 Mbps) = 7080000 bps (7.08 Mbps). Where has that 7 Mbps of throughput gone?
    EDIT: I tried to read the wiki page on Reed-Solomon, but it's all crazy maths and algebra, which I don't understand. I can understand that data is split into 255-byte codewords, because that may be the max codeword size whilst still maintaining accuracy during transmission; but I don't understand why that means less data is sent.
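
    For what it's worth, both quoted figures fall straight out of the 4000 frame/s rate; a quick back-of-the-envelope check (the second line assumes, as the quoted passage implies, that one DMT data frame carries at most one 255-byte Reed-Solomon codeword, so the missing capacity is framing headroom that can never be filled rather than parity bytes alone):

        // Back-of-the-envelope check of the two figures quoted above.
        public class AdslRates {
            public static void main(String[] args) {
                int framesPerSecond = 4000;      // DMT data frame rate
                int dataSubCarriers = 254;       // usable downstream sub-carriers
                int bitsPerSubCarrier = 15;      // max bits modulated per sub-carrier
                int maxCodewordBytes = 255;      // Reed-Solomon codeword limit

                double theoretical = dataSubCarriers * bitsPerSubCarrier * (double) framesPerSecond;
                double framed = maxCodewordBytes * 8 * (double) framesPerSecond;

                System.out.printf("sub-carrier limit: %.2f Mbps%n", theoretical / 1e6); // 15.24
                System.out.printf("codeword limit:    %.2f Mbps%n", framed / 1e6);      // 8.16
            }
        }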

    Read the article

  • Is there a way to tell what the download speed is from a site/server

    - by Memor-X
    I'm looking into which ISP I should go with at the place I'm moving into. One ISP, which I have been told good things about, has data limits (which, when breached, will drop your speed to dial-up speed) but multiple memberships which, apart from the cheapest membership, have the same data limits (the cheapest has a 10 GB data limit). In their fine print, they say that each different membership has different port speeds; one particular part jumps out at me:
        These speeds are the NBN (National Broadband Network) port speed and not the actual Internet data speed which will vary based on numerous factors including destination you are reaching, your network equipment, network congestion etc.
    I plan to use the net to download DLC and patch updates for games (particularly the insanely large update for the Wii U) and games from Steam (if I find any good ones other than this one JRPG), and to download development resources from free sites like Deposit Files and Mediafire. Since one membership with a 1000 GB data limit is $145 with the port speed being 12Mbps/1Mbps (cheapest), while another with the same limit is $190 with a port speed of 100Mbps/40Mbps (expensive), I am wondering how I can tell what the speed coming from a site is, since I don't want to be wasting money on speed that makes no difference (unlike memory, which I'd rather have to spare).
    NOTE: the speeds are for a fiber optic network, which where my new place is can only be reached via fixed wireless, which I may not be able to get with this ISP; but if I can get this network then good.
    NOTE 2: most of the resources I get from Deposit Files are about 200 MB or less; if a resource pack is larger, it's split into multiple archives (like .7z.part), while on Mediafire I have yet to see one bigger than 150 MB.
    NOTE 3: one update patch for a PS3 game is close to 4 GB (Disgaea 4), which I need in order to get access to the DLC, and on the weekend I downloaded 5 GB for the Final Fantasy XIV Open Beta for the PS3, which took almost 5 hours.

    Read the article

  • How do I improve my incremental-backup performance?

    - by Alistair Bell
    I'm currently using the traditional rsync+cp -al method to create incremental/snapshot backups of our server tree. The backups are going onto a pair of eight-disk towers connected to the backup machine (a Sandy Bridge machine with 16 GB of RAM, running CentOS 5.5) via four eSATA connections (four disks per connection). Each disk is a regular 2 TB disk, so we have 32 TB of disk space connected to the backup machine. We're backing up about 20 TB of data on the servers with this. The problem is that each daily backup is taking more than 24 hours, and the real time-killer isn't the actual rsync, but the time it takes to perform a cp -al of the tree locally on the backup machine. It's taking more than 12 hours just to make the shadow copy of the tree, and as far as I can tell the performance bottleneck is at the disk (top shows the cp using a lot of RAM but not a lot of CPU, and mostly in uninterruptible-sleep state). We have the server data split into four major volumes (and a few minor ones), and each of these backups runs in parallel (with some offsets in the cron to try to get some disks' cp done first). There are two volumes on the backup drive, both striped LVM volumes of 16 TB each. So obviously I need to improve the performance, because it's unusable as it stands. The first question is: when CentOS 6 comes out, with support for btrfs, will making snapshots of subvolumes with btrfs substantially increase this performance? The second is: is there a way, with ext3 or something else supported in CentOS 5 or 6, to 'encourage' it to put the directories/inodes in one part of a volume (which could happen to be the part that's on an SSD, via LVM) and the files in another? That would presumably solve the problem, but I don't know of ways to hint ext3 like that.

    Read the article

  • How does Subnetting Work?

    - by Kyle Brandt
    How does subnetting work, and how do you do it by hand or in your head? Can someone explain it both conceptually and with several examples? Server Fault gets lots of subnetting homework questions, so we could use an answer to point them to on Server Fault itself.
        What is classless routing and why is class-based routing obsolete?
        If I have a network, how do I figure out how to split it up?
        If I am given a netmask, how do I know what the network range is for it?
        Sometimes there is a slash followed by a number; what is that number?
        Sometimes there is a subnet mask, but also a wildcard mask; they seem like the same thing but they are different?
        Someone mentioned something about knowing binary for this?
    Not looking for links to other sites (unless maybe you have one post with a bunch of good ones). I already know how to subnet, I just thought it would be nice if Server Fault had a generic subnetting answer.
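
    Not an answer to the whole question, but the bitmask arithmetic behind the netmask/range sub-questions can be sketched in a few lines (the example address and prefix are arbitrary):

        // Minimal illustration of the netmask arithmetic: given a.b.c.d/prefix,
        // the network and broadcast addresses are plain bit masks (example values
        // are arbitrary; the formulas are sensible for the common /8 to /30 range).
        public class SubnetSketch {
            public static void main(String[] args) {
                int[] octets = {192, 168, 10, 77};   // example host address
                int prefix = 26;                     // "/26" is the same as 255.255.255.192

                int addr = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3];
                int mask = 0xFFFFFFFF << (32 - prefix);
                int network = addr & mask;           // host bits cleared
                int broadcast = network | ~mask;     // host bits set
                int usableHosts = (1 << (32 - prefix)) - 2;

                System.out.println("mask:         " + dotted(mask));
                System.out.println("network:      " + dotted(network));
                System.out.println("broadcast:    " + dotted(broadcast));
                System.out.println("usable hosts: " + usableHosts);
            }

            static String dotted(int ip) {
                return ((ip >>> 24) & 255) + "." + ((ip >>> 16) & 255) + "."
                        + ((ip >>> 8) & 255) + "." + (ip & 255);
            }
        }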

    Read the article

  • email output of powershell script

    - by Gordon Carlisle
    I found this wonderful script that outputs the status of the current DFS backlog to the PowerShell console. This works great, but I need the script to email me so I can schedule it to run nightly. I have tried using the Send-MailMessage command, but can't get it to work, mainly because my PowerShell skills are very weak. I believe most of the issues revolve around the script using the Write-Host command. While the coloring is nice, I would much rather have it email me the results. I also need the solution to be able to specify a mail server, since the DFS servers don't have email capability. Any help or tips are welcome and appreciated. Here is the code:
        $RGroups = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query "SELECT * FROM DfsrReplicationGroupConfig"
        $ComputerName=$env:ComputerName
        $Succ=0
        $Warn=0
        $Err=0
        foreach ($Group in $RGroups) {
            $RGFoldersWMIQ = "SELECT * FROM DfsrReplicatedFolderConfig WHERE ReplicationGroupGUID='" + $Group.ReplicationGroupGUID + "'"
            $RGFolders = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query $RGFoldersWMIQ
            $RGConnectionsWMIQ = "SELECT * FROM DfsrConnectionConfig WHERE ReplicationGroupGUID='"+ $Group.ReplicationGroupGUID + "'"
            $RGConnections = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query $RGConnectionsWMIQ
            foreach ($Connection in $RGConnections) {
                $ConnectionName = $Connection.PartnerName.Trim()
                if ($Connection.Enabled -eq $True) {
                    if (((New-Object System.Net.NetworkInformation.ping).send("$ConnectionName")).Status -eq "Success") {
                        foreach ($Folder in $RGFolders) {
                            $RGName = $Group.ReplicationGroupName
                            $RFName = $Folder.ReplicatedFolderName
                            if ($Connection.Inbound -eq $True) {
                                $SendingMember = $ConnectionName
                                $ReceivingMember = $ComputerName
                                $Direction="inbound"
                            } else {
                                $SendingMember = $ComputerName
                                $ReceivingMember = $ConnectionName
                                $Direction="outbound"
                            }
                            $BLCommand = "dfsrdiag Backlog /RGName:'" + $RGName + "' /RFName:'" + $RFName + "' /SendingMember:" + $SendingMember + " /ReceivingMember:" + $ReceivingMember
                            $Backlog = Invoke-Expression -Command $BLCommand
                            $BackLogFilecount = 0
                            foreach ($item in $Backlog) {
                                if ($item -ilike "*Backlog File count*") {
                                    $BacklogFileCount = [int]$Item.Split(":")[1].Trim()
                                }
                            }
                            if ($BacklogFileCount -eq 0) {
                                $Color="white"
                                $Succ=$Succ+1
                            } elseif ($BacklogFilecount -lt 10) {
                                $Color="yellow"
                                $Warn=$Warn+1
                            } else {
                                $Color="red"
                                $Err=$Err+1
                            }
                            Write-Host "$BacklogFileCount files in backlog $SendingMember->$ReceivingMember for $RGName" -fore $Color
                        } # Closing iterate through all folders
                    } # Closing If replies to ping
                } # Closing If Connection enabled
            } # Closing iteration through all connections
        } # Closing iteration through all groups
        Write-Host "$Succ successful, $Warn warnings and $Err errors from $($Succ+$Warn+$Err) replications."
    Thanks, Gordon

    Read the article

  • HLS video segmenting complications. How to create a transport stream with ffmpeg

    - by Agzam
    I have h264 videos, and currently we're using Apple's HTTP Video Streaming tools and mediafilesegmenter to segment these files. What I need to do is to switch to an alternative segmenter based on this very popular open-sourced segmenter. The problem is that this segmenter does not take just any video; it takes only MPEG-TS videos. So I have to convert my h264 videos to TS first. I can do that with ffmpeg. I'm using this:
        ffmpeg -i encoded.mp4 -vcodec h264 -i encoded.mp4 -sameq -acodec aac -strict experimental -f mpegts output.ts
    But this creates considerably larger output. The reason is that Apple's segmenter keeps the same video codec - AVC - and the same audio codec - AAC - whereas ffmpeg changes the video format to MPEG Video. The question is: can I somehow keep the same AVC video codec and still convert the video to a transport stream? So my goal is to keep the same video quality and the same video codecs as Apple's mediafilesegmenter does.
    UPD: Okay... it seems that ffmpeg CAN split videos into segments:
        ffmpeg -i encoded.mp4 -c copy -map 0 -vbsf h264_mp4toannexb -f segment -segment_time 10 -segment_list test.m3u8 -segment_format mpegts segment%d.ts
    That still has one problem: it doesn't create an HTTP Live Streaming index file. (-segment_list creates a file with a list of segments, but it doesn't look like an HLS index.) So, you still have to create the index file.

    Read the article

  • Create new folder for new sender name and move message into new folder

    - by Dave Jarvis
    Background
    I'd like to have Outlook 2010 automatically move e-mails into folders designated by the person's name. For example:
        Click Rules
        Click Manage Rules & Alerts
        Click New Rule
        Select "Move messages from someone to a folder"
        Click Next
    The following dialog is shown:
    Problem
    The next part usually looks as follows:
        Click people or public group
        Select the desired person
        Click specified
        Select the desired folder
    Question
    How would you automate those problematic manual tasks? Here's the logic for the new rule I'd like to create:
        Receive a new message.
        Extract the name of the sender.
        If it does not exist, create a new folder under Inbox.
        Move the new message into the folder assigned to that person's name.
    I think this will require a VBA macro.
    Related Links
        http://www.experts-exchange.com/Software/Office_Productivity/Groupware/Outlook/A_420-Extending-Outlook-Rules-via-Scripting.html
        http://msdn.microsoft.com/en-us/library/office/ee814735.aspx
        http://msdn.microsoft.com/en-us/library/office/ee814736.aspx
        http://stackoverflow.com/questions/11263483/how-do-i-trigger-a-macro-to-run-after-a-new-mail-is-received-in-outlook
        http://en.kioskea.net/faq/6174-outlook-a-macro-to-create-folders
        http://blogs.iis.net/robert_mcmurray/archive/2010/02/25/outlook-macros-part-1-moving-emails-into-personal-folders.aspx
    Update #1
    The code might resemble something like:
        Public WithEvents myOlApp As Outlook.Application
        Sub Initialize_handler()
            Set myOlApp = CreateObject("Outlook.Application")
        End Sub
        Private Sub myOlApp_NewMail()
            Dim myInbox As Outlook.MAPIFolder
            Dim myItem As Outlook.MailItem
            Set myInbox = myOlApp.GetNamespace("MAPI").GetDefaultFolder(olFolderInbox)
            Set mySenderName = myItem.SenderName
            On Error GoTo ErrorHandler
            Set myDestinationFolder = myInbox.Folders.Add(mySenderName, olFolderInbox)
            Set myItems = myInbox.Items
            Set myItem = myItems.Find("[SenderName] = " & mySenderName)
            myItem.Move myDestinationFolder
        ErrorHandler:
            Resume Next
        End Sub
    Update #2
    Split the code as follows: Sent a test message and nothing happened. The instructions for actually triggering a macro when a new message arrives are a little light on details (for example, no mention is made regarding ThisOutlookSession and how to use it). Thank you.

    Read the article
