Search Results

Search found 2874 results on 115 pages for 'magic quotes gpc'.


  • Windows XP Installation problems

    - by Samurai Waffle
    I'm having trouble installing Windows XP on a computer. My friend gave me her old computer; it was riddled with viruses and ran extremely slowly. I did my best to clean it out, and after a bit I discovered it had a boot sector virus. So I downloaded the Ultimate Boot CD (installed it on a flash drive) and ran Darik's Boot and Nuke to completely wipe the hard drive. I then tried to reinstall Windows XP from a USB drive, but it doesn't work: the computer just stalls and never boots. The computer's DVD drive doesn't work, so I borrowed a spare drive from another friend and tried to run a Windows XP CD. At first I got the STOP 7B error, but now it just stalls like the USB drive does. Since then I've booted back into the Ultimate Boot CD, run Partition Magic, repartitioned the hard drive, and copied the files from the Windows CD to the hard drive. Is there any way I can make it run the Windows setup off the hard drive? I have the UBCD at my disposal, but have yet to come up with a way to do it. Any help is greatly appreciated.
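    One approach that is often suggested for exactly this situation, offered here only as a sketch: Windows XP setup can be started from a local copy of the i386 folder using winnt.exe (the DOS-mode installer) or winnt32.exe (when launched from an existing 32-bit Windows). The catch is that plain DOS cannot read NTFS, so this assumes the i386 folder was copied to a FAT32 partition and that the UBCD boot media provides a DOS environment with smartdrv available:

        smartdrv.exe            (disk cache; without it the file-copy phase is painfully slow)
        cd \i386
        winnt.exe /s:c:\i386    (starts text-mode XP setup from the local source files)

    The setup.exe in the CD root is only the autorun front end, so it never needs to run at all.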


  • Convert mp4 video to a format xbox 360 can play

    - by Björn Lindqvist
    Here is a video file my Xbox 360 refuses to play:

      $ MP4Box -info video.mp4
      * Movie Info *
      Timescale 90000 - Duration 02:18:33.365
      Fragmented File no - 2 track(s)
      File Brand mp42 - version 0
      Created: GMT Sat Jul 21 07:08:55 2012
      File has root IOD (9 bytes)
      Scene PL 0xff - Graphics PL 0xff - OD PL 0xff
      Visual PL: ISO Reserved Profile (0x7f)
      Audio PL: High Quality Audio Profile @ Level 2 (0x0f)
      No streams included in root OD
      iTunes Info:
        Encoder Software: HandBrake 0.9.6 2012022800
      Track # 1 Info - TrackID 1 - TimeScale 90000 - Duration 02:18:33.235
      Media Info: Language "Undetermined" - Type "vide:avc1" - 199318 samples
      Visual Track layout: x=0 y=0 width=1280 height=688
      MPEG-4 Config: Visual Stream - ObjectTypeIndication 0x21
      AVC/H264 Video - Visual Size 1280 x 688
      AVC Info: 1 SPS - 1 PPS - Profile High @ Level 4.1
      NAL Unit length bits: 32
      Self-synchronized
      Track # 2 Info - TrackID 2 - TimeScale 48000 - Duration 02:18:33.365
      Media Info: Language "English" - Type "soun:mp4a" - 389689 samples
      MPEG-4 Config: Audio Stream - ObjectTypeIndication 0x40 MPEG-4 Audio
      MPEG-4 Audio AAC LC - 6 Channel(s) - SampleRate 48000
      Synchronized on stream 1

      $ avconv -i video.mp4
      avconv version 0.8.4-4:0.8.4-0ubuntu0.12.04.1, Copyright (c) 2000-2012 the Libav developers
        built on Nov 6 2012 16:51:33 with gcc 4.6.3
      Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
        Metadata:
          major_brand     : mp42
          minor_version   : 0
          compatible_brands: mp42isomavc1
          creation_time   : 2012-07-21 07:08:55
          encoder         : HandBrake 0.9.6 2012022800
        Duration: 02:18:33.36, start: 0.000000, bitrate: 2299 kb/s
        Stream #0.0(und): Video: h264 (High), yuv420p, 1280x688, 1973 kb/s, 23.98 fps, 90k tbr, 90k tbn, 180k tbc
          Metadata:
            creation_time   : 2012-07-21 07:08:55
        Stream #0.1(eng): Audio: aac, 48000 Hz, 5.1, s16, 319 kb/s
          Metadata:
            creation_time   : 2012-07-21 07:08:55
      At least one output file must be specified

    What tool, such as ffmpeg or mencoder, and what magic command line incantation should I use to transcode this file into a format Xbox 360 can play? I want the transcode process to retain as good video quality as possible.
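    The most likely sticking point in output like the above is the 6-channel AAC track: the Xbox 360's MP4 playback is generally limited to 2-channel AAC, while H.264 High profile at level 4.1 (which this video already is) is within its limits. A minimal sketch that leaves the video untouched and only downmixes the audio (file names are placeholders; on the avconv 0.8 shown above the equivalent flags would be -vcodec copy -acodec aac -strict experimental -ac 2 -ab 192k):

        ffmpeg -i video.mp4 -c:v copy -c:a aac -ac 2 -b:a 192k video-xbox360.mp4

    Copying the video stream avoids any quality loss from re-encoding; only the audio is touched, which also makes the conversion fast.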


  • What breaks in a Windows domain if a member has a high time skew?

    - by Ryan Ries
    It's taken for granted by most IT people that in a Windows domain, if a member server's clock is off by more than 5 minutes (or however many minutes you've configured it for) from that of its domain controller - logons and authentications will fail. But that is not necessarily true. At least not for all authentication processes on all versions of Windows. For instance, I can set my time on my Windows 7 client to be skewed all to heck - logoff/logon still works fine. What happens is that my client sends an AS_REQ (with his time stamp) to the domain controller, and the DC responds with KRB_AP_ERR_SKEW. But the magic is that when the DC responds with the aforementioned Kerberos error, the DC also includes his time stamp, which the client in turn uses to adjust his own time and resubmits the AS_REQ, which is then approved. This behavior is not considered a security threat because encryption and secrets are still being used in the communication. This is also not just a Microsoft thing. RFC 4430 describes this behavior. So my question is does anyone know when this changed? And why is it that other things fail? For instance, Office Communicator kicks me off if my clock starts drifting too far out. I really wish to have more detail on this. edit: Here's the bit from RFC 4430 that I'm talking about: If the server clock and the client clock are off by more than the policy-determined clock skew limit (usually 5 minutes), the server MUST return a KRB_AP_ERR_SKEW. The optional client's time in the KRB-ERROR SHOULD be filled out. If the server protects the error by adding the Cksum field and returning the correct client's time, the client SHOULD compute the difference (in seconds) between the two clocks based upon the client and server time contained in the KRB-ERROR message. The client SHOULD store this clock difference and use it to adjust its clock in subsequent messages. If the error is not protected, the client MUST NOT use the difference to adjust subsequent messages, because doing so would allow an attacker to construct authenticators that can be used to mount replay attacks.
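    For the follow-on question of which machines are actually drifting and by how much, the built-in w32tm tool gives a quick read without waiting for an application to complain (the DC name below is a placeholder):

        w32tm /stripchart /computer:dc01.example.com /samples:5 /dataonly
        w32tm /resync

    The stripchart output prints the offset from the target clock in seconds per sample, which makes it easy to see whether an application such as Communicator is enforcing a stricter skew limit than the Kerberos exchange itself tolerates.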


  • Problem building PHP extension module

    - by tixrus
    I'm trying to get GD into my PHP. I compiled it, made php.ini point to it, restarted Apache, etc., but no GD. The Apache error log says:

      PHP Warning: PHP Startup: gd: Unable to initialize module
      Module compiled with module API=20060613
      PHP compiled with module API=20090115
      These options need to match
      in Unknown on line 0

    A bit of googling says I should not use the phpize I already have before configuring and making the extension; I should use a newer one called phpize5. I surely don't have any such thing, unless it's packed up inside something else in my PHP 5.3 distribution. Where do you get it? On Ubuntu I could apparently just run sudo apt-get install php-dev and it would appear by magic, at least according to the web page. Unfortunately I am running Mac OS X Leopard. How can I build this GD module so that it matches the API number of my PHP?
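    The usual way out of a "module API=20060613 vs. API=20090115" mismatch is to rebuild the extension with the phpize and php-config that belong to the PHP you actually run; the API number is baked in from the PHP headers used at build time. A sketch, with the install prefix as an assumption (substitute wherever your PHP 5.3 lives):

        cd php-5.3.x/ext/gd                   # or the standalone extension source you compiled
        /usr/local/php5.3/bin/phpize          # must be the phpize shipped with that PHP build
        ./configure --with-php-config=/usr/local/php5.3/bin/php-config
        make && sudo make install

    There is no separate "phpize5" download: that is simply what Debian-based systems name the phpize shipped with PHP 5; on a source build it is plain phpize under the install prefix.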


  • Passenger throwing undefined method `-@' for "master":String after Puppet 3.0.0 upgrade

    - by Andy Shinn
    My Puppet master is using Passenger to serve. After upgrading to Puppet 3.0.0 I am getting the following error: [ pid=17576 thr=70231398486460 file=utils.rb:176 time=2012-10-01 17:37:12.892 ]: *** Exception NoMethodError in PhusionPassenger::Rack::ApplicationSpawner (undefined method `-@' for "master":String) (process 17576, thread #): from config.ru:7 from /usr/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/builder.rb:51:in `instance_eval' from /usr/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/builder.rb:51:in `initialize' from config.ru:1:in `new' from config.ru:1 My config.ru is as follows: # a config.ru, for use with every rack-compatible webserver. # SSL needs to be handled outside this, though. # if puppet is not in your RUBYLIB: # $LOAD_PATH.unshift('/opt/puppet/lib') $0 = "master" # if you want debugging: # ARGV << "--debug" ARGV << "--rack" # Rack applications typically don't start as root. Set --confdir to prevent # reading configuration from ~/.puppet/puppet.conf ARGV << "--confdir" << "/etc/puppet" # NOTE: it's unfortunate that we have to use the "CommandLine" class # here to launch the app, but it contains some initialization logic # (such as triggering the parsing of the config file) that is very # important. We should do something less nasty here when we've # gotten our API and settings initialization logic cleaned up. # # Also note that the "$0 = master" line up near the top here is # the magic that allows the CommandLine class to know that it's # supposed to be running master. # # --cprice 2012-05-22 require 'puppet/util/command_line' # we're usually running inside a Rack::Builder.new {} block, # therefore we need to call run *here*. run Puppet::Util::CommandLine.new.execute Any idea what may be happening?


  • Hyper-V Virtual Machine Networking issues related to Max Ethernet Frame Size

    - by Goatmale
    I fixed an issue earlier today, but I'm interested in learning WHY the fix worked. We set up a new Hyper-V virtual machine only to discover that HTTP traffic wasn't working; HTTPS, pings, and everything else were fine. After months of prodding around I took a shot in the dark: on the Hyper-V host server, the physical NIC had an advanced setting, "Max Ethernet Frame Size", set to 1500. Setting it to 1514 fixed the issue, while 1512 did not; 1514 is the magic number. My best guess is that with the setting at 1500, incoming pings got through because their data payload is a lot smaller than, say, HTTP traffic. As for HTTPS, I read about something called "Path MTU discovery", which I assume is why HTTPS traffic was getting through fine, albeit more slowly. Looking at this post, people agree that 1518 is the maximum total frame size. Why didn't I need to change the setting to 1518 instead of 1514 bytes? And why is the default 1500, if that is the maximum size of the Ethernet payload rather than the maximum frame size?
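    The usual accounting, which lines up with the numbers observed here, is:

        1500 bytes   maximum Ethernet payload (the IP packet; this is what "MTU" refers to)
      +   14 bytes   Ethernet header: 6 (destination MAC) + 6 (source MAC) + 2 (EtherType)
      = 1514 bytes   the largest frame the driver exchanges with the NIC
      +    4 bytes   frame check sequence (FCS), normally added and stripped by the hardware
      = 1518 bytes   the maximum frame on the wire

    On that reading, a driver setting that counts header plus payload but not the hardware-generated FCS needs exactly 1514, and a value of 1500 silently drops any full-sized frame while small packets such as pings still fit, which matches the symptoms described. This is an interpretation of the numbers, not a statement of how that particular NIC driver defines its field.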


  • ADF Enterprise Application Development - Made Simple (Book Review)

    - by Frank Nimphius
    Sten E. Vesterli wrote the "Oracle ADF Enterprise Application Development – Made Simple" book, published by Packt Publishing in 2011 (http://www.packtpub.com/oracle-adf-enterprise-application-development/book). A common question on OTN, but also when talking to clients or customers, is where and how to start your ADF application development. Especially when the current programming background is not in Java but 4GL or PL/SQL, developers often look for answers to the following questions:
    · How long does it take to learn Oracle ADF?
    · How long does it take to replace a Forms application with ADF?
    · How many developers do I need?
    · Do I need to know Java to use ADF, and if yes, how well do I need to know it?
    · How do I structure my programming files, organizing them in JDeveloper workspaces, projects and libraries?
    · What are best practices for naming Java packages, and how do I avoid naming conflicts in ADF in general?
    · How many Application Modules do I need or should I create?
    · How do I test applications?
    Sten Vesterli answers all of the above questions and more in his book (http://www.packtpub.com/oracle-adf-enterprise-application-development/book), which makes it a great value add to the 3 existing Oracle ADF books. In order of complexity (which also is the order in which reading the available Oracle ADF books makes sense), in my opinion, Sten's book should come second – though it also is useful to those that are already more advanced with Oracle ADF. So if you are absolutely new to Oracle ADF, then the order of books to read to get you up to an expert level should be:
    1. Grant Ronald; "Quick Start Guide to Oracle Fusion Development: Oracle JDeveloper and Oracle ADF" (McGraw Hill 2010)
    2. Sten Vesterli; "Oracle ADF Enterprise Application Development – Made Simple" (Packt Publishing 2011)
    3. Duncan Mills, Peter Koletzke; "Oracle JDeveloper 11g Handbook: A Guide to Fusion Web Development" (McGraw Hill 2009)
    4. Frank Nimphius, Lynn Munsinger; "Oracle Fusion Developer Guide: Building Rich Internet Applications with Oracle ADF Business Components and Oracle ADF Faces" (McGraw Hill 2010)
    If you are not new to Oracle ADF and Oracle JDeveloper, then buy Sten Vesterli's book anyway. It is worth it, and you will want to have it on your bookshelf. See below the table of contents to get a better idea of what this book covers:
    · Chapter 1: The ADF Proof of Concept
    · Chapter 2: Estimating the Effort
    · Chapter 3: Getting Organized
    · Chapter 4: Productive Teamwork
    · Chapter 5: Prepare to Build
    · Chapter 6: Building the Enterprise Application
    · Chapter 7: Testing your Application
    · Chapter 8: Look and Feel
    · Chapter 9: Customizing the Functionality
    · Chapter 10: Securing your ADF Application
    · Chapter 11: Package and Deliver
    · Appendix: Internationalization
    The book is written with a lot of good humor, which makes the read very enjoyable (from a geek's perspective, of course). My favorite quote – just in case you are interested – is from page 97, when Sten talks about getting organized: "Stop sending e-mails to your team. Just stop it. E-mail is so last century. …" So true, so true! This quote's runner-up is the "boss key" on page 128, where Sten talks about productivity and how Oracle Team Productivity Center (TPC) can help you with this. Quotes like these stick in your brain and make sure you never forget. Go for it!


  • How to have multiple web servers and IPs on the same physical network

    - by jsigned
    I do web development out of a small office and need to have multiple physical and virtual servers that can be accessed from the internet. I also have a number of devices (computers, laptops, tablets, printers, etc) that need connections as well. I have gotten a subnet of 8 IP's from my ISP and while that is adequate for the web servers its far too small for everything that needs access to the network. My router is an ASUS RT-N16 running DD-WRT. I'm just smart enough about this routing topic to be dangerous, think 2 year old with a magic marker. I would like to keep my internal network NAT'ed on the 192.168.x.x network and route the 68.69.x.x 255.255.255.248 traffic directly to the servers. The physical network consists of the 4 port DD-WRT router and an unmanaged gig switch. I have a fiber connection to the office that works as an Ethernet port. In other words I can plug my laptop directly into it and have access to the internet. There is no login or password and the router is setup to get DHCP from the ISP, and to provide DHCP addresses for the internal network. What I've done so far is google and try different configurations with little success. In the end I decided I didn't even know how to ask the questions needed. My questions are: Is this the best way to configure the network? How do you do it? VLANs? Multiple routers? I've never had to configure a router using anything more than the GUI so if this is command line stuff be gentle.


  • Move an existing RAID 5 array from Ubuntu to Gentoo

    - by Cocoabean
    I have a 64-bit Ubuntu machine with a 4-disk RAID 5 using software raid (md). I've been able to boot an Ubuntu LiveCD and recognize the array with a simple mdadm -A /dev/md0. It was easy to mount after that and nothing had to rebuild. I'm installing Gentoo on this box now (multi-boot, non-RAID root partition) and I have md auto-detect turned on in the kernel. When I boot Gentoo I get: "invalid superblock magic on sdd" for each of the drives in the array. I boot back to Ubuntu and they mount no problem. I tried copying the mdadm.conf that works in Ubuntu to Gentoo, and then ran mdadm -A /dev/md0 but it reports that there is no array named md0. I don't want to lose data (obviously) and I don't want to have to let the RAID rebuild every time I switch between OSes. Any help is appreciated. Both are using mdadm 3.1.4 Both are running 64-bit kernels. mdadm -D /dev/md0 from Ubuntu yields: http://pastebin.com/5gj2QNkV UPDATE: After rebooting I noticed that it still complains about invalid blocks, but cat /proc/mdstat shows an inactive /dev/md127 with the same disks as my raid. I want to mount it but I don't want to get stuck waiting for a rebuild or destroying it inadvertently. mdadm -D /dev/md127 Here is pastebin of mdadm -D /dev/md127 on gentoo: http://pastebin.com/gDCWn0Rn UPDATE II: dmesg output about 'invalid raid superblocks' http://paste.ubuntu.com/885471/ fdisk -l from Ubuntu, /dev/md0 does not have any partitions but I do have it mounted and accessible: http://paste.ubuntu.com/885475/
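    One detail worth checking, offered as a sketch with device names assumed rather than taken from the actual box: the kernel's in-kernel RAID autodetection only understands the old 0.90 superblock format, so an array created by Ubuntu with 1.x metadata will log "invalid superblock magic" at boot even though userspace mdadm assembles it without complaint.

        mdadm --examine /dev/sd[abcd]1 | grep -i version    # confirm the metadata version on each member
        mdadm --stop /dev/md127                             # release the half-assembled, inactive array
        mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
        mdadm --detail --scan >> /etc/mdadm.conf            # so Gentoo's init scripts can assemble it by name at boot

    Assembling an array that was shut down cleanly does not trigger a resync, so switching between the two operating systems this way should not cost a rebuild.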


  • Viability of Apache (MPM Worker), FastCGI PHP 4/5.2/5.3, and MySQL 5

    - by Adrian
    My server will be hosting numerous PHP web applications ranging from Joomla, Drupal, and some legacy (read: PHP4) and other custom-built code inherited from clients. This will be a development machine used by a dozen or so web developers, and issues like fluctuating loads or particularly high load expectations are not important. Now, my question: are there any concerns I should know about when using Apache w/ MPM Worker, PHP 4/PHP 5.2/PHP 5.3 (all via FastCGI), and MySQL 5 (with a query cache of 64MB)? I have not tested the various applications extensively, and I have only recently learned how to install PHP and utilize it via FastCGI (rather than mod_php, which in this case seemed impossible considering the multiple versions of PHP and the desire to use MPM Worker over MPM Prefork). I have come to understand that there could be concerns regarding XCache and APC, namely non-thread-safety issues where data becomes corrupted and the capability to use MPM Worker becomes null and void. Is this a valid concern? I have been using my personal testing server (running Ubuntu Server Edition 10.04 in VirtualBox), which has 2GB of RAM available to it. Here is the configuration used (the actual server will likely use a configuration more tailored to suit its purposes):

      Apache:
        Server version: Apache/2.2.14 (Ubuntu)
        Server built:   Apr 13 2010 20:22:19
        Server's Module Magic Number: 20051115:23
        Server loaded:  APR 1.3.8, APR-Util 1.3.9
        Compiled using: APR 1.3.8, APR-Util 1.3.9
        Architecture:   64-bit
        Server MPM:     Worker
          threaded:     yes (fixed thread count)
          forked:       yes (variable process count)

      Worker:
        <IfModule mpm_worker_module>
            StartServers          2
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxClients          400
            MaxRequestsPerChild 2000
        </IfModule>

      PHP ./configure (PHP 4.4.9, PHP 5.2.13, PHP 5.3.2):
        --enable-bcmath \
        --enable-calendar \
        --enable-exif \
        --enable-ftp \
        --enable-mbstring \
        --enable-pcntl \
        --enable-soap \
        --enable-sockets \
        --enable-sqlite-utf8 \
        --enable-wddx \
        --enable-zip \
        --enable-fastcgi \
        --with-zlib \
        --with-gettext \

      Apache php-fastcgi-setup.conf:
        FastCgiServer /var/www/cgi-bin/php-cgi-5.3.2
        FastCgiServer /var/www/cgi-bin/php-cgi-5.2.13
        FastCgiServer /var/www/cgi-bin/php-cgi-4.4.9
        ScriptAlias /cgi-bin-php/ /var/www/cgi-bin/


  • How to get a new-pssession in PowerShell to talk to my ICS-connected laptop for Remoting

    - by Scott Bilas
    If I have my laptop on the LAN, then Powershell remoting works fine from my workstation to the laptop. However, the LAN is wireless, and so sometimes I will connect on a wire to my workstation. It has two ethernet ports so I have the secondary wired up to share to the laptop using Win7's Internet Connection Sharing. (Btw I know that avoiding ICS would solve the problem, but that's not an option right now.) So my question is: what magic registry bits or command line options do I need to flip to get remoting to work to my laptop through ICS? Here's what happens when I try it: new-pssession -computername 192.168.137.161 [192.168.137.161] Connecting to remote server failed with the following error message : The WinRM client cannot process the request. Default authentication may be used with an IP address under the following conditions: the transport is HTTPS or the destination is in the TrustedHosts list, and explicit credentials are provided. Use winrm.cmd to configure TrustedHosts. Note that computers in the TrustedHosts list might not be authenticated. For more information on how to set TrustedHosts run the following command: winrm help config. For more information, see the about_Remote_Troubleshooting Help topic. + CategoryInfo : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [], PSRemotingTransportException + FullyQualifiedErrorId : PSSessionOpenFailed I'm having a hard time understanding the documentation for PowerShell and WinRM. I've tried messing with allowing ports in the firewall and setting TrustedHosts to * on my workstation (don't think this is a good idea on the laptop). I have no idea where to go from here, would appreciate any help.
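    The error text itself points at the usual workaround when connecting by IP address instead of by name: either use HTTPS as the transport or add the ICS-side address to the client's TrustedHosts list and pass explicit credentials. A minimal sketch, run on the workstation that initiates the connection (the IP is the one from the question):

        Set-Item WSMan:\localhost\Client\TrustedHosts -Value '192.168.137.161' -Concatenate -Force
        New-PSSession -ComputerName 192.168.137.161 -Credential (Get-Credential)

    TrustedHosts only needs to change on the initiating machine, since it governs which destinations the WinRM client is willing to send credentials to without Kerberos mutual authentication; nothing has to be loosened on the laptop.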


  • Joy! | Important Information About Your iPad 3G

    - by Jeff Julian
    Looks like I was one of the lucky 114,000 who AT&T lost their email to “hackers”.  Why is “hackers” in “double quotes”.  I can just imagine some executive at AT&T in their “Oh No, We Messed Up Meeting” saying, what happened?  Then someone replied, well we have had a breach and “hackers” broke in (using the quote in the air gesture) and stole our iPad 3G customers emails. Oh well, I am sure my email has been sold and sold again by many different vendors, why not AT&T now.  At least Dorothy Attwood could have gave us her email to give to someone else instead of blinking it through a newsletter system. June 13, 2010 Dear Valued AT&T Customer, Recently there was an issue that affected some of our customers with AT&T 3G service for iPad resulting in the release of their customer email addresses. I am writing to let you know that no other information was exposed and the matter has been resolved.  We apologize for the incident and any inconvenience it may have caused. Rest assured, you can continue to use your AT&T 3G service on your iPad with confidence. Here’s some additional detail: On June 7 we learned that unauthorized computer “hackers” maliciously exploited a function designed to make your iPad log-in process faster by pre-populating an AT&T authentication page with the email address you used to register your iPad for 3G service.  The self-described hackers wrote software code to randomly generate numbers that mimicked serial numbers of the AT&T SIM card for iPad – called the integrated circuit card identification (ICC-ID) – and repeatedly queried an AT&T web address.   When a number generated by the hackers matched an actual ICC-ID, the authentication page log-in screen was returned to the hackers with the email address associated with the ICC-ID already populated on the log-in screen. The hackers deliberately went to great efforts with a random program to extract possible ICC-IDs and capture customer email addresses.  They then put together a list of these emails and distributed it for their own publicity. As soon as we became aware of this situation, we took swift action to prevent any further unauthorized exposure of customer email addresses.  Within hours, AT&T disabled the mechanism that automatically populated the email address. Now, the authentication page log-in screen requires the user to enter both their email address and their password. I want to assure you that the email address and ICC-ID were the only information that was accessible. Your password, account information, the contents of your email, and any other personal information were never at risk.  The hackers never had access to AT&T communications or data networks, or your iPad.  AT&T 3G service for other mobile devices was not affected. While the attack was limited to email address and ICC-ID data, we encourage you to be alert to scams that could attempt to use this information to obtain other data or send you unwanted email. You can learn more about phishing by visiting the AT&T website. AT&T takes your privacy seriously and does not tolerate unauthorized access to its customers’ information or company websites.   We will cooperate with law enforcement in any investigation of unauthorized system access and to prosecute violators to the fullest extent of the law. AT&T acted quickly to protect your information – and we promise to keep working around the clock to keep your information safe.  Thank you very much for your understanding, and for being an AT&T customer. 
Sincerely, Dorothy Attwood Senior Vice President, Public Policy and Chief Privacy Officer for AT&T Technorati Tags: AT&T,iPad 3G,Email


  • Ubuntu Wubi "drive" failure; mount drive in XP?

    - by 618034
    Hi there, I installed the Wubi distribution of Ubuntu on a separate partition (which is silly, since why do I care if Windows can still manage the partition?) a few months back; it was pretty awesome, until Linux hosed. At this point, I can get Ubuntu to boot if I try really hard through grub, but once it does start, the screen is hosed, so no dice. At this point, I'd like to wipe it all and start over, but I need to get some stuff off the "disk". The Wubi install makes this difficult, since the "disk" is a flat file on an NTFS partition. I've done just about everything I can think of — I renamed the virtual disk .iso, mounted it with VirtualCloneDrive, then used whatever magic EXT3 (EXT4?) readers I could dig up on the Internet to parse the mount — but nothing's working. Can you offer any suggestions? The "disk" is currently in D:\ubuntu\disks\root.iso. Many thanks! (I may be high-latency at the moment, apologies if I don't address follow-ups quickly)
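    Renaming the file to .iso and handing it to an ISO mounter will not work, because the Wubi "disk" is an ext3/ext4 filesystem image rather than an ISO9660 image. A sketch of the usual recovery route, assuming the machine can boot any Linux live CD or USB (the NTFS device name is an assumption; the file path follows the D:\ubuntu\disks\root.iso location given above, though on a stock Wubi install the file is normally called root.disk):

        sudo mkdir -p /mnt/win /mnt/wubi
        sudo mount -o ro /dev/sda2 /mnt/win                        # the NTFS partition that holds D:\ubuntu
        sudo mount -o loop,ro /mnt/win/ubuntu/disks/root.iso /mnt/wubi
        cp -a /mnt/wubi/home/youruser /path/to/backup/             # copy out whatever is needed

    Mounting everything read-only keeps the damaged install from being changed while the data is copied off.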


  • How to find the real IP to which IPVS is routing a virtual IP

    - by Wayne Conrad
    I'm trying to find a problem server hiding behind a virtual IP (using LVS/ipvs). I've got a test program that sends requests to the virtual IP until it gets the bad response, but how can I tell to which real IP a request to the virtual IP got routed? On the box doing the virtual IP magic, here's the virtual IP configuration (for the service I care about):

      IP Virtual Server version 1.2.1 (size=4096)
      Prot LocalAddress:Port Scheduler Flags
        -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
      ...
      TCP  10.1.0.254:5025 nq
        -> 10.1.0.5:5025                Route   1      0          1
        -> 10.1.0.6:5025                Route   1      0          5
        -> 10.1.0.7:5025                Route   1      0          2
        -> 10.1.0.9:5025                Local   1      0          3
        -> 10.1.0.11:5025               Route   1      0          3
      ...

    My client program is sending TCP requests to 10.1.0.254:5025, usually getting a good response but sometimes a bad response. With this few servers, I could send my request to each server in turn until I discover the culprit, but I wonder if that technique will scale as we add servers. What means exist for me to find out where requests got routed?

      Kernel: Linux 2.6.32
      OS: Debian testing (whatever that's called these days).
      ipvsadm is version 1.25, compiled with ipvs v1.2.1
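    One low-effort approach that avoids touching the real servers at all is to read the IPVS connection table on the director immediately after the test client sees a bad response; each entry records which real server a given client connection was scheduled to. A sketch (the client address in the sample line is made up):

        ipvsadm -L -n -c | grep 5025
        # e.g.  TCP 00:54  ESTABLISHED 10.1.0.50:41234    10.1.0.254:5025    10.1.0.6:5025

    The last column is the real server that handled that connection. Common alternatives are having each backend stamp its identity in a response header, or simply logging connections on the backends and correlating timestamps.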


  • How to download Vim script on the command-line?

    - by HaiYuan Zhang
    Whenever I want to install a new Vim script on the Linux server I'm working on, my typical workflow is the following:
    1. surf to the plugin's homepage on Vim online using FireXXXX;
    2. download the right version of the plugin to my laptop by clicking the highlighted link;
    3. upload the downloaded plugin from my laptop to the Linux server using WinSCP.
    This is really inconvenient. I don't know what the magic behind it is: clicking the same hyperlink in a web browser downloads the file, but feeding that hyperlink to Wget on the Linux command line ends up with nothing but an error. Alternatively, I can get the link from the web browser and then use Wget or some similar tool to actually do the downloading. I try new Vim scripts quite often, so you can imagine my dismay at having to repeat this tedious process all the time. What are some tips that would let me download Vim scripts in a more "professional" way? Post edit: My problem is not finding a tool like Wget or cURL; the problem is quite specific: using these tools to download a Vim script. Let's take http://www.vim.org/scripts/script.php?script_id=30 as an example; it's the normal place where one can get the script, at least for me. But I can't find a working URL on this page that I can feed to Wget.
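    For vim.org specifically, the download link on a script page points at a download handler rather than a static file, which is why feeding the page URL itself to Wget returns nothing useful. A sketch of what normally works (the src_id value is a placeholder; take it from the "package" download link of the version you want, and quote the URL so the shell does not interpret the ? and &):

        wget 'http://www.vim.org/scripts/download_script.php?src_id=9196' -O plugin.zip
        # or let the server suggest the file name:
        wget --content-disposition 'http://www.vim.org/scripts/download_script.php?src_id=9196'

    The script.php?script_id=NN page is only the description page; the actual file comes from download_script.php.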


  • Workstations cannot see new MS Server 2008 domain, but can access DHCP. (solved)

    - by Radix
    The XP Pro workstations do not see the new replacement domain upon boot; they only see their cached entry for the old (server 2003) domain controller. The old_server is not connected to the network. I have DHCP working with the same scope as the old_server. In my "before-asking" search for a solution I came across the following two articles, and I recall doing things as suggested by the articles. http://www.windowsreference.com/windows-server-2008/how-to-setup-dhcp-server-in-windows-server-2008-step-by-step-guide/ http://www.windowsreference.com/windows-server-2008/step-by-step-guide-for-windows-server-2008-domain-controller-and-dns-server-setup/ The only possible issue is: I was under the impression that the domain netbios needed to match the DC's netbios. The DC netbios is city01 while the domain's FQDN is city.domain.org (I think this is mistaken and should have been just domain.org) But, the second link led me to a post which I believe answers my question. I did as they instructed by opening Local Area Connection Properties, then selecting TCP/IPv4 and setting the sole preferred DNS server to the local hosts static IP (10.10.1.1). Search for "Your problems should clear up" for the post I'm referencing: http://forums.techarena.in/active-directory/1032797.htm Have I misunderstood their instructions? I am hoping to reach the point where I can define users and user groups. Also, does TechNet have a single theoretical overview document I could read. I really don't like treating comps as magic. I will be watching this closely and will quickly answer any questions. If I've left anything out it is because I did not know it was needed. PS: I am loath to ask obviously basic questions, but I am tired and wish to fix this before tomorrow. Also, this is my first server installation, thank you for your help.
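    For what it's worth, a quick way to verify the part that usually breaks this scenario, namely the workstations finding the new DC's SRV records in DNS, is to query the DC's DNS service directly from an XP client (names follow the city01 / domain.org / 10.10.1.1 values mentioned above):

        nslookup -type=SRV _ldap._tcp.dc._msdcs.domain.org 10.10.1.1
        nltest /dsgetdc:domain.org
        ipconfig /flushdns

    If the SRV query returns city01 and nltest can locate the DC, then pointing each client's preferred DNS at 10.10.1.1 as described is the standard fix; and if the new domain was built from scratch rather than migrated, each workstation also has to leave the old domain and join the new one before the cached old entry stops mattering.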


  • Why does Apache throw a 403 on the index file after install?

    - by den-javamaniac
    Hi. I've just installed apache and php from sources using next commands: ./configure --prefix="/mnt/workspace/servers/web/apache-2.2.17" \ --enable-info --enable-rewrite --enable-usertrack --enable-mime-magic for apache and ./configure --with-apxs2=/mnt/workspace/servers/web/apache-2.2.17/bin/apxs \ --prefix=/mnt/workspace/servers/web/apache-2.2.17/php \ --with-config-file-path=/mnt/workspace/servers/web/apache-2.2.17/php \ --with-mysql=mysqlnd for php. After adjusting configuration (httpd.conf) and starting apache it gives a 403 response on http://localhost:8060/index.html (presuming that 8060 is used) request. There are next directory settings in httpd.conf: <Directory "/mnt/workspace/servers/web/apache-2.2.17/htdocs"> ... Order allow,deny Allow from all ... </Directory> <IfModule dir_module> DirectoryIndex index.html index.php </IfModule> It should be noted that I've got apache on a mounted (default auto mount configured while installing ubuntu) partition. Log Files Access log: ::1 - - [12/Feb/2011:17:48:30 +0200] "GET / HTTP/1.1" 403 202 ::1 - - [12/Feb/2011:17:48:31 +0200] "GET /favicon.ico HTTP/1.1" 403 213 ::1 - - [12/Feb/2011:17:48:48 +0200] "GET /index.html HTTP/1.1" 403 212 ::1 - - [12/Feb/2011:17:48:48 +0200] "GET /favicon.ico HTTP/1.1" 403 213 ::1 - - [12/Feb/2011:17:49:03 +0200] "GET /index.html HTTP/1.1" 403 212 ::1 - - [12/Feb/2011:17:49:03 +0200] "GET /favicon.ico HTTP/1.1" 403 213 Error log: [Sat Feb 12 18:59:13 2011] [notice] Apache/2.2.17 (Unix) PHP/5.3.5 configured -- resuming normal operations [Sat Feb 12 18:59:22 2011] [error] [client ::1] (13)Permission denied: access to / denied [Sat Feb 12 18:59:22 2011] [error] [client ::1] (13)Permission denied: access to /favicon.ico denied [Sat Feb 12 18:59:36 2011] [error] [client ::1] (13)Permission denied: access to /index.html denied
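    Given that the DocumentRoot sits on a separately mounted partition, the first thing worth ruling out is plain filesystem permissions: the user Apache runs as needs execute (search) permission on every directory in the path, not just read permission on index.html, and "(13)Permission denied" in the error log is the operating-system errno rather than anything Apache decided on its own. A quick sketch of how to check the whole chain (the apache user is whatever your build uses, often daemon for a source build or www-data on Ubuntu):

        namei -m /mnt/workspace/servers/web/apache-2.2.17/htdocs/index.html
        sudo -u daemon cat /mnt/workspace/servers/web/apache-2.2.17/htdocs/index.html

    If either command is refused, the fix is in the directory modes or the mount's ownership and options rather than in httpd.conf.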


  • Expand disk space on Ubuntu 10.04 (VMWare Guest)

    - by Jason Clawson
    I need to resize the disk space of an Ubuntu guest in VMware Workstation. After using the expand-disk utility in VMware Workstation, I need to do some Linux magic to resize the partition. I have searched and found a lot of posts about resizing it; unfortunately I don't really understand it all that well. Can anyone help me out with this?

    df -h gives me:

      Filesystem               Size  Used Avail Use% Mounted on
      /dev/mapper/ubuntu-root   19G  2.6G   16G  15% /
      none                     496M  172K  495M   1% /dev
      none                     500M     0  500M   0% /dev/shm
      none                     500M   64K  500M   1% /var/run
      none                     500M     0  500M   0% /var/lock
      none                     500M     0  500M   0% /lib/init/rw
      none                      19G  2.6G   16G  15% /var/lib/ureadahead/debugfs
      /dev/sda1                228M   36M  181M  17% /boot

    lvs says:

      LV     VG     Attr   LSize   Origin Snap% Move Log Copy% Convert
      root   ubuntu -wi-ao  18.88g
      swap_1 ubuntu -wi-ao 884.00m

    fdisk -l says:

      Disk /dev/sda: 21.5 GB, 21474836480 bytes
      255 heads, 63 sectors/track, 2610 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00033718

         Device Boot Start   End    Blocks   Id System
      /dev/sda1   *      1    32    248832   83 Linux
      Partition 1 does not end on cylinder boundary.
      /dev/sda2         32  2611  20719617    5 Extended
      /dev/sda5         32  2611  20719616   8e Linux LVM

    I really appreciate the help.
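    Since the root filesystem sits on LVM (volume group "ubuntu", logical volume "root" in the output above), the usual sequence after growing the virtual disk in VMware is to hand the new space to LVM and then grow the filesystem. A sketch with an assumed new partition /dev/sda3; take a snapshot or backup of the VM first and double-check device names:

        # 1. with fdisk, create a new partition of type 8e (Linux LVM) in the added space,
        #    then reboot or re-read the partition table
        sudo pvcreate /dev/sda3                       # 2. make it a physical volume
        sudo vgextend ubuntu /dev/sda3                #    and add it to the volume group
        sudo lvextend -l +100%FREE /dev/ubuntu/root   # 3. grow the logical volume into the free space
        sudo resize2fs /dev/ubuntu/root               # 4. grow the filesystem (ext3/ext4 can be grown online)

    Growing the existing sda5 partition instead would also work, but adding a new physical volume avoids touching the extended partition layout at all.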


  • Event Log "Wake Source" when system wakes from sleep

    - by Doltknuckle
    So I've been troubleshooting sleep timers for our systems and have run across an interesting issue. I need a way to report how long a system was awake after a number of different inputs. Now, I've discovered that the System Log tracks wake and sleep events and even tells you the times that everything happens at. The thing is doesn't tell you is what triggered the wake event. It does give you a numerical code however. Here are some examples of what I am finding. Index : 2901 EntryType : Information InstanceId : 1 Message : The system has resumed from sleep. Sleep Time: 2010-10-01T23:20:06.097488100Z Wake Time: 2010-10-03T17:41:12.796400500Z Wake Source: 0 Category : (0) CategoryNumber : 0 Source : Microsoft-Windows-Power-Troubleshooter -- Index : 2841 EntryType : Information InstanceId : 1 Message : The system has resumed from sleep. Sleep Time: 2010-10-01T19:19:37.239789600Z Wake Time: 2010-10-01T21:28:48.921200800Z Wake Source: 4HID Keyboard Device Category : (0) CategoryNumber : 0 Source : Microsoft-Windows-Power-Troubleshooter So here's my question: Does anyone know what the different numerical codes for the "Wake Source" mean? I think "0" is a magic packet and "4" is a USB device. Does anyone have any idea if there is any documentation out there on this for Windows 7? Thanks in advance
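    I can't point to an official table for those numeric codes, but two built-in commands are handy for cross-checking what a particular resume was caused by (run from an elevated prompt):

        powercfg -lastwake                   # reports the wake source of the most recent resume
        powercfg -devicequery wake_armed     # lists every device currently allowed to wake the system

    Comparing powercfg -lastwake against the log entry for the same resume is a practical way to confirm, on your own hardware, guesses like the ones above about what 0 and 4 stand for.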


  • How to move my data from my old MacBook Pro to my new one?

    - by Tim Büthe
    I just purchased a new MacBook Pro and already have a 2008 model. I wonder how to move all my data over to the new one. My first idea was to use my Time Machine backup and restore from it, which seems like a good idea and should work just fine according to this link: http://blog.duncandavidson.com/2008/01/restoring-from-time-machine.html. But since my current MacBook has older software on it, like iLife '08 instead of iLife '09, I would have to upgrade that afterwards. Is this correct, or does Time Machine do some magic to exclude well-known software? And is it possible to reinstall or upgrade iLife with the included installation DVDs? My second idea is to just swap the hard drives instead of using the Time Machine backup. If it is not too complicated to remove the hard drive, this should be the fastest way. It also has the benefit that the 2008 MacBook would then contain a brand-new installation, and I wouldn't have to remove all my stuff or reinstall Mac OS before giving it away. My question on that second idea: does Snow Leopard handle this correctly? Can I just reboot with the new hardware and everything works fine? So in a nutshell: what would you do, restore from backup or swap drives? And what about the new software?


  • unreadable corrupted ntfs partition - lost clusters reported

    - by Eduardo Martinez
    Hi, partition magic is reporting multiple 'bad file record signature' and 'lost clusters' errors on my 250GB samsung sata disk (connected via usb on a xp sp3). Unfortunately PM is unable to fix. PM shows the drive as being NTFS, detects used space ok and also drive name. But PM browser (right click on partition, browse...) won't show anything (as if disk was empty) Windows Explorer is not even picking the drive name and reports 'the file or directory is corrupted and unreadable' PTDD partition table doctor demo tells me the boot sector is fine, and I can see all disk content on its browser - but crucially cannot copy that content over to a new disk (PTDD browser is pretty arid to say the least) Also tried - photorec-6.11.3 - it actually started to extract files but wouldn't keep file names or any folder structure (maybe I missed sth on the configuration options) - find and mount - intellectual scan went well, the only partition on the disk was detected, then tried to mount into p: but got this error on windows explorer: 'p:\ is not accesible. The media is write protected'. Find and mount allows you to create an image from partition but I don't have a disk big enough at hand. Does anyone know if this will keep the extracted files/folders structure intact? I'm starting to think the disk is pretty screwed and my chances to recover this data are slim. Please someone enlighten me with that marvellous piece of software I am missing :-) Thanks in advance


  • Process does ICMP port scan on my OSX box and I am afraid my Mac got a virus

    - by Jamgold
    I noticed that my 10.6.6 box has some process sending out ICMP messages to "random" hosts, which concerns me a lot. when doing a tcpdump icmp I see a lot of the following 15:41:14.738328 IP macpro > bzq-109-66-184-49.red.bezeqint.net: ICMP macpro udp port websm unreachable, length 36 15:41:15.110381 IP macpro > 99-110-211-191.lightspeed.sntcca.sbcglobal.net: ICMP macpro udp port 54045 unreachable, length 36 15:41:23.458831 IP macpro > 188.122.242.115: ICMP macpro udp port websm unreachable, length 36 15:41:23.638731 IP macpro > 61.85-200-21.bkkb.no: ICMP macpro udp port websm unreachable, length 36 15:41:27.329981 IP macpro > c-98-234-88-192.hsd1.ca.comcast.net: ICMP macpro udp port 54045 unreachable, length 36 15:41:29.349586 IP macpro > c-98-234-88-192.hsd1.ca.comcast.net: ICMP macpro udp port 54045 unreachable, length 36 I got suspicious when my router notified me about a lot of ICMP messages that don't get a response [INFO] Mon Jan 10 16:31:47 2011 Blocked outgoing ICMP packet (ICMP type 3) from 192.168.1.189 to 212.25.57.90 Does anyone know how to trace which process (or worse kernel module) might be responsible for this? I rebooted and logged in with a virgin user account and tcpdump showed the same results. Any dtrace magic welcome.
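    One thing worth checking before assuming malware: "udp port ... unreachable" ICMP messages are generated by the kernel in reply to inbound UDP packets that arrive on ports nothing is listening on, so output like the above shows the Mac answering traffic it receives rather than a local process scanning outward. Two quick ways to see what is triggering it (the IP is the one from the router log above):

        sudo tcpdump -n udp and dst host 192.168.1.189    # watch the inbound UDP that provokes the ICMP replies
        sudo lsof -nP -i UDP                              # list the processes actually bound to UDP ports

    If something local recently listened on port 54045 (a P2P or chat client, for example), leftover peers will keep probing that port for a while after the application closes, which produces exactly this pattern.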


  • How to remove static IP from Mitel 5312 and enable DHCP

    - by jimbo
    I'm not sure this is the right forum for this question -- although I'm confident I'll be told if not! -- but I've read the fine manual (at least, such a manual as I have), I've googled and I cannot get any insight into where to even start solving this problem. I have a bunch of Mitel 5312 handsets, talking to a 3300 ICP controller. Some handsets are at a remote location, get an address from my DHCP server over there, and use the Mitel "Teleworker" extension to connect in over the Internet. The remaining handsets were set up with static IPs by a BT-supplied engineer, on the same subnet as the ICP itself. So far, so good. I have one remaining teleworker licence, and need to move a handset from the home location to the remote. I've managed to boot it and configure teleworker, but I cannot for the life of me see where I tell it to forget its static IP, and make a DHCP request. Any ideas? Should I be looking on the controller, or holding magic combinations of buttons on the handset itself? EDIT: Following some advice from Robert, below, I've broken out a spare device and reassigned the profile for this user's extension to the MAC of the new phone, and a new profile to the old MAC. Unfortunately this still doesn't get me anywhere -- the new handset now asks for the teleworker install password. I suspect I'm going to have to get a Mitel engineer involved here, since I've never been given that password... Unless anyone has any great ideas?


  • How to download a Vim script from the command line

    - by HaiYuan Zhang
    Whenever I want to install a new Vim script on the Linux server I'm working on, my typical workflow is the following:
    1. surf to the plugin's homepage on Vim online using fireXXXX;
    2. download the right version of the plugin to my laptop by clicking some highlighted link;
    3. upload the downloaded plugin from my laptop to the Linux server using WinSCP.
    This is really inconvenient. I don't know what the magic behind it is: if I click the same hyperlink in a web browser it downloads, but using wget plus that hyperlink on the Linux command line ends up with nothing but an error. Alternatively, I can get the link in the web browser and then use wget or some similar tool to actually do the downloading. I try new Vim scripts quite often, so you can imagine my dismay at having to repeat the tedious procedure all the time. So if any of you know some tips that would let me download Vim scripts in a more "professional" way, I'd appreciate it a lot. Post edit: my problem is not finding a tool like wget or curl. The problem is quite specific: using these tools to download a Vim script. Let's take http://www.vim.org/scripts/script.php?script_id=30 as an example; it's the normal place where one can get the script, at least for me. But I can't find a working URL on this page that I can feed to wget.


  • ArchBeat Link-o-Rama Top 10 for October 21-27, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook Page for the week of October 21-27, 2012. OTN Architect Day: Los Angeles This is your brain on IT architecture. Stuff your cranium with architecture by attending Oracle Technology Network Architect Day in Los Angeles, October 25, 2012, at the Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048. Technical sessions, panel Q&A, and peer roundtables—plus a free lunch. [NOTE: The event was last week, of course. Big thanks to the session presenters and especially to those Angelinos who came out for the event.] WebLogic Server 11gR1 Interactive Quick Reference"The WebLogic Server 11gR1 Administration interactive quick reference," explains Juergen Kress, "is a multimedia tool for various terms and concepts used in WebLogic Server architecture. This tool is available for administrators for online or offline use. This is built as a multimedia web page which provides descriptions of WebLogic Server Architectural components, and references to relevant documentation. This tool offers valuable reference information for any complex concept or product in an intuitive and useful manner." Podcast: Are You Future Proof? The latest OTN ArchBeat Podcast series features Oracle ACE Directors Ron Batra, Basheer Khan, and Ronald van Luttikhuizen, three practicing architects in an open discussion about how changes in enterprise IT are raising the bar for success for software architects and developers. Play Oracle Vanquisher Here's a little respite from whatever it is you normally spend your time on. Oracle Vanquisher is an online diversion that makes a game of data center optimization. According to the description: "Armed with a cool Oracle vacuum pack suit and a strategic IT roadmap, you will thwart threats and optimize your data center to increase your company’s stock price and boost your company’s position." Mainly you avoid electric shock and killer birds. The current high score belongs to someone identified as 'TEN." My score? Never mind. Advanced Oracle SOA Suite OOW 2012 PresentationsThe Oracle SOA Product Management team has compiled a complete list of all twelve of their Oracle SOA Suite presentations from Oracle OpenWorld 2012, with links to the slide decks. OAM and OIM 11g Academies Looking for technical how-to content covering Oracle Access Manager and Oracle Identity Manager? The people behind the Oracle Middleware Security blog have indexed relevant blog posts into what they call "Academies." "These indexes contain the articles we’ve written that we believe provide long lasting guidance on OAM and OIM. Posts covered in these series include articles on key aspects of OAM and OIM 11g, best practice architectural guidance, integrations, and customizations." Oracle’s Analytics, Engineered Systems, and Big Data Strategy | Mark Rittman Part 1 of 3 in Oracle ACE Director Mark Rittman's series on Oracle Exalytics, Oracle R Enterprise and Endeca. Oracle ACE Directors Nordic Tour 2012 : Venues and BI Presentations | Mark RittmanOracle ACE Director Mark Rittman shares information on the Oracle ACE Director Tour, as the community leaders make their way through the land of the midnight sun, with events in Copenhagen, Stockholm, Oslo and Helsinki. Following the Thread in OSB | Antony Reynolds Antony Reynolds recently led an Oracle Service Bus POC in which his team needed to get high throughput from an OSB pipeline. "Imagine our surprise when, on stressing the system, we saw it lock up, with large numbers of blocked threads." 
He shares the details of the problem and the solution in this extensive technical post. OW12: Oracle Business Process Management/Oracle ADF Integration Best Practices | Andrejus Baranovskis The Oracle OpenWorld presentations keep coming! Oracle ACE Director Andrejus Baranovskis shares the slides from "Oracle Business Process Management/Oracle ADF Integration Best Practices," co-presented with Danilo Schmiedel from Opitz Consulting. Thought for the Day "Not everything that can be counted counts, and not everything that counts can be counted." — Albert Einstein (March 14, 1879 – April 18, 1955) Source: Quotes For Software Engineers

