Search Results

Search found 19966 results on 799 pages for 'wild thing'.


  • Core i7-620M vs Core i5-540M

    - by Shalan
    Hi, I'm recommending a laptop to a colleague, and the specific laptop he has chosen offers the above CPU chips as options. Both chips have 2 cores/4 threads. The i7-620M is a 2.66 GHz part (4 MB cache) while the i5-540M is 2.53 GHz (3 MB cache)...both Arrandale architecture. He is a .NET programmer working with SQL Server and Oracle, and occasionally uses Adobe Fireworks for web-related design elements. He also loves playing around in Adobe Premiere Pro, and does a lot of media/video work. Would you notice any significant performance difference between the two? The laptop manufacturer claims that the battery life on both is the same irrespective of the chip used (although I find that hard to believe), but there is a major cost difference between them, with the Core i7-620M being the more expensive. According to http://ark.intel.com, the one thing that seems different (besides the obvious speeds/frequencies/etc.) is a feature called "Embedded" - what is this exactly? You can see the quick comparison here - http://ark.intel.com/Compare.aspx?ids=43544,43560 I would sincerely appreciate any advice on this. THANKS!

    Read the article

  • Windows 7 pro remote desktop true multi monitor support patch or hack

    - by Ryan D
    After hours of research, the closest I came to any patch that could fix this was a concurrent-sessions patch, which is not what I want. I have two machines, both Windows 7 Professional. It's not an option to upgrade either to Ultimate, as the computer being remoted into is in another state at our corp office. I need dual-monitor support and would have used the multimon:i:1 .rdp edit, except the remote tower is not Win 7 Ultimate. I have read over and over how it is not supported except with Ultimate, Enterprise, and 2008 Server. What have they put into Windows 7 Ultimate that is not in Win 7 Pro, and how can I get it or patch Win 7 Pro to give it the same functionality? I would have paid for a software patch had I been able to find it anywhere. Summary: I am looking for the ability to remote to a Win 7 Pro computer from another Win 7 computer while being able to use dual monitors. Is there anyone that has the skill or balls to help with this? Most Respectfully, Ryan
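
    For reference, the client-side knob being described works like this - a minimal sketch (the remote host must still run an edition that allows spanning, which is the whole problem here):

        rem launch the RDP client with multi-monitor spanning enabled
        mstsc.exe /multimon

        rem the equivalent line inside a saved .rdp file is:
        use multimon:i:1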

    Read the article

  • Hyper-V Server 2012 - "No Active Network Adapter Found"

    - by Vazgen
    I just installed Hyper-V Server 2012, and the first thing I get when logging in to the host is "No Active Network Adapter Found". I know I have to use the following command:

        pnputil -i -a <inf file>

    I found the Realtek network driver for my motherboard for Windows 7 here: http://za.asus.com/Motherboards/AMD_AM3Plus/M5A97_PRO/#download But there is not one for Windows 8, so I tried using the Win 7 one with PnPUtil, and I got:

        Failed to install the driver on any of the devices on the system: No more data is available.

    Does this really mean I can't use Hyper-V Server 2012 with my motherboard? This is hard to believe, because I installed Windows Server 2012 directly (prior to installing Hyper-V Server 2012) on this same computer, and I had internet access on that server without even configuring any drivers. OK, so I went to PowerShell from cmd and typed Get-NetIPConfiguration, and it showed the name of my driver (Realtek) and "Disconnected". I swapped the Ethernet cable between my primary computer and my server, and everything was fixed. Interestingly, my primary computer connects to the internet just fine with the cable that was previously in the server, just with much slower speed. Does anyone know what could be the difference between these 2 Ethernet cables?
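
    For reference, the staging step with a concrete path would look like this (a sketch - the extracted folder and .inf name below are hypothetical; use whatever the driver package actually unpacks to):

        pnputil -i -a C:\Drivers\Realtek\rt64win7.inf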

    Read the article

  • WebSphere Plugin Keystore Unreadable by IHS - GSK_ERROR_BAD_KEYFILE_PASSWORD

    - by Seer
    Running WAS 6.1.xx in a network deployment. The IBM-provided plugin keystore's "plugin-key.kdb" password expires on April 26th, along with the personal cert inside it. So no problem, right? Create a new cert, set a new password on the kdb, restash the password, and off we go! Well, no! On restart of IBM HTTP Server we see:

        [Tue Apr 24 14:11:22 2012] 00b00004 00000001 - ERROR: lib_security: logSSLError: str_security (gsk error 408): GSK_ERROR_BAD_KEYFILE_PASSWORD
        [Tue Apr 24 14:11:22 2012] 00b00004 00000001 - ERROR: lib_security: initializeSecurity: Failed to initialize GSK environment
        [Tue Apr 24 14:11:22 2012] 00b00004 00000001 - ERROR: ws_transport: transportInitializeSecurity: Failed to initialize security
        [Tue Apr 24 14:11:22 2012] 00b00004 00000001 - ERROR: ws_server: serverAddTransport: Failed to initialize security
        [Tue Apr 24 14:11:22 2012] 00b00004 00000001 - ERROR: ws_server: serverAddTransport: HTTPS Transport is skipped
        [Tue Apr 24 14:11:22 2012] 00b00004 00000001 - ERROR: lib_security: logSSLError: str_security (gsk error 408): GSK_ERROR_BAD_KEYFILE_PASSWORD
        [Tue Apr 24 14:11:22 2012] 00b00004 00000001 - ERROR: lib_security: initializeSecurity: Failed to initialize GSK environment
        [Tue Apr 24 14:11:22 2012] 00b00004 00000001 - ERROR: ws_transport: transportInitializeSecurity: Failed to initialize security
        [Tue Apr 24 14:11:22 2012] 00b00004 00000001 - ERROR: ws_server: serverAddTransport: Failed to initialize security
        [Tue Apr 24 14:11:22 2012] 00b00004 00000001 - ERROR: ws_server: serverAddTransport: HTTPS Transport is skipped

    Here is the thing... I can open the keystore using the new password with ikeyman and keytool. Using some "slightly dodgy" script, I can reverse the stash file and see that indeed the new password is set. Then, even if I restore the old keystore files (plugin-key.kdb, plugin-key.crl, plugin-key.sth, plugin-key.rdb), they no longer work either! So it must be permissions, right? Well, the permissions are the same as before; if I switch to the apache user I can browse right through to the files and read them. I have even chown'ed them to apache:apache and/or chmod 777, and still it's the same error! Does anyone have a clue what is going on here? It's pretty urgent, as our site will be without HTTPS in a couple of days if this isn't resolved - that's bad for a retail web site :)
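
    For completeness, the restash step was done along these lines (a sketch only - it assumes WAS 6.1's GSKit 7 gsk7cmd is on the PATH, the kdb path here is illustrative, and the flags should be double-checked against your GSKit version):

        gsk7cmd -keydb -stashpw -db /opt/IBM/HTTPServer/Plugins/config/webserver1/plugin-key.kdb -pw <new password>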

    Read the article

  • Ubuntu boot hangs after message "Running /scripts/init-bottom ... done"

    - by Douglas B. Staple
    I've been trying to copy a Proxmox container based on the Ubuntu Precise Standard template to a VirtualBox VM. I am now stuck at a point where my new Ubuntu/VirtualBox VM hangs after the message "Running /scripts/init-bottom ... done" during boot. I started by installing Ubuntu Server 12.04.4 LTS on a VirtualBox VM; that was the closest "official" Ubuntu ISO to the Proxmox container OS I could find. I installed all updates on both the Proxmox container and the VirtualBox VM, the idea being to get the same kernel version running on both:

        sudo apt-get update ; sudo apt-get upgrade ; sudo apt-get dist-upgrade
        sudo reboot

    Then I rsynced the entire Proxmox container to a temporary directory in the VirtualBox VM:

        cd /
        mkdir /tmp/backup
        rsync -e ssh -av --exclude={/dev,/proc,/sys,/tmp,/run,/mnt,/media,/lost+found,/boot,/selinux} root@my_proxmox_container_hostname:/ /tmp/backup

    Next I shut down the virtual machine and booted the VM with a bootable Linux image (the Desktop image of Ubuntu 12.04 LTS, ubuntu-12.04.4-desktop-i386.iso), dropped to a root prompt, and mounted the VM root filesystem:

        sudo mount /dev/sda1 /mnt

    I removed the files from most of /mnt:

        cd /mnt
        sudo rm -rf bin etc home lib opt sbin root usr var

    Then I moved all of the files from /mnt/tmp/backup into /mnt:

        sudo mv /mnt/tmp/backup/* /mnt

    and rebooted the system. For me, at this point the system freezes after starting, right after the message:

        Running /scripts/init-bottom ... done

    I've tried reinstalling GRUB and all manner of other things. I am almost ready to give up.
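
    One avenue worth trying before giving up - a chroot repair sketch from the live CD's root prompt, on the assumption that the copied system's modules and the VM's initramfs/boot loader no longer match (it takes /dev/sda1 to be the VM's root partition):

        mount /dev/sda1 /mnt
        mount --bind /dev /mnt/dev
        mount --bind /proc /mnt/proc
        mount --bind /sys /mnt/sys
        chroot /mnt
        update-initramfs -u -k all   # rebuild the initramfs for every installed kernel
        update-grub                  # regenerate grub.cfg
        grub-install /dev/sda        # reinstall the boot loader on the VM disk
        exit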

    Read the article

  • Cannot create a project TFS 2012 - TF218027

    - by GrandMasterFlush
    I've just installed TFS 2012 and am trying to create a new project in the default collection via Visual Studio 2012, but I keep getting this error message:

        TF218027: The following reporting folder could not be created on the server that is running SQL Server Reporting Services: /TfsReports/DefaultCollection. The report server is located at: http://<servername>/Reports. The error is: The permissions granted to user '<domain>/grandmasterflush' are insufficient for performing this operation.. Verify that the path is correct and that you have sufficient permissions to create the folder on that server and then try again.

    I've checked the permissions: my user is a member of the Project Collection Administrators group, and that group has the 'Create new project' permission set to allow. The only thing I can think of is that the user I created during installation for SharePoint access and report viewing does not have permission to write to the reports folder; however, if I select "Do not configure a SharePoint site at this time", I still get the error message. I can't find the reports folder to check the permissions either. TFS is using an instance of SQL 2012 that was already on the machine when TFS was installed. Can anyone see what I'm doing wrong, please?

    Read the article

  • Cannot reactivate RAID-5 volume: The size of the plex member is invalid

    - by Ian Boyd
    We had a 3-drive Windows Server 2008 R2 RAID-5 fail (operating in redundancy mode):

        WDC 1 TB
        WDC 1 TB
        WDC 1 TB

    We removed the failed hard drive and put a WDC 1 TB drive (that we had standing by) into the machine. When launched, Disk Manager asked permission to "initialize" the disk as either:

        Master Boot Record (MBR)
        GUID Partition Table (GPT)

    We initialized the disk as GPT, converted it to dynamic, and tried to use the Repair Volume command - except it was greyed out (which is a terrifying thing on a failed production server hosting 3 virtual servers). I tried from the diskpart command-line tool. First we look for our RAID-5 volume that is in Failed Rd mode:

        DISKPART> list volume

          Volume ###  Ltr  Label        Fs    Type       Size     Status     Info
          ----------  ---  -----------  ----- ---------- -------  ---------  --------
          Volume 0     E   VMs (Raid5)  NTFS  RAID-5     1863 GB  Failed Rd
          Volume 1     D                      DVD-ROM        0 B  No Media
          Volume 2         System Rese  NTFS  Partition   100 MB  Healthy    System
          Volume 3     C                NTFS  Partition  1862 GB  Healthy    Boot

    There, Volume 0. Make that our active context:

        DISKPART> select volume 0

        Volume 0 is the selected volume.

    Now we need to find the disk we will be repairing the volume with:

        DISKPART> list disk

          Disk ###  Status   Size     Free     Dyn  Gpt
          --------  -------  -------  -------  ---  ---
          Disk 0    Online    931 GB      0 B   *
          Disk 1    Online    931 GB   931 GB   *
          Disk 2    Online   1863 GB      0 B
          Disk 3    Online    931 GB      0 B   *
          Disk M0   Missing      0 B      0 B   *

    The disk with 931 GB free: Disk 1. Now we just need to repair the volume:

        DISKPART> repair disk=1

        Virtual Disk Service error:
        The size of the plex member is invalid.
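
    For anyone retracing the size comparison behind that error, diskpart can print the candidate disk's exact capacity for comparison against the surviving members - a diagnostic sketch:

        DISKPART> select disk 1
        DISKPART> detail disk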

    Read the article

  • Networking stopped working on Ubuntu

    - by 1337Rooster
    I installed Ubuntu 10.04 through the Wubi installer (funny, I installed it today and thought I would have gotten 10.10). I had a network connection and everything was working fine. I rebooted my computer a couple of times and then suddenly I could not connect to the network, and when I click the wireless/networking icon it says "Networking Disabled". I reinstalled Ubuntu and the problem went away. After a few reboots the problem returned. I have tried restarting to see if it would come back, as well as a few other things listed below. Any other suggestions would be appreciated. Tried to restart networking via /etc/init.d/networking:

        amato@ubuntu:~$ sudo /etc/init.d/networking restart
         * Reconfiguring network interfaces...
        Ignoring unknown interface eth0=eth0.                                 [ OK ]

    Tried to stop and start it:

        amato@ubuntu:~$ sudo /etc/init.d/networking stop
         * Deconfiguring network interfaces...                                [ OK ]
        amato@ubuntu:~$ sudo /etc/init.d/networking start
        Rather than invoking init scripts through /etc/init.d, use the service(8)
        utility, e.g. service networking start

        Since the script you are attempting to invoke has been converted to an
        Upstart job, you may also use the start(8) utility, e.g. start networking
        networking stop/waiting

    Tried start networking:

        amato@ubuntu:~$ start networking
        start: Rejected send message, 1 matched rules; type="method_call", sender=":1.58" (uid=1000 pid=2241 comm="start) interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init"))
        amato@ubuntu:~$ sudo start networking
        networking stop/waiting

    Tried service networking restart:

        amato@ubuntu:~$ service networking restart
        restart: Rejected send message, 1 matched rules; type="method_call", sender=":1.60" (uid=1000 pid=2248 comm="restart) interface="com.ubuntu.Upstart0_6.Job" member="Restart" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init"))
        amato@ubuntu:~$ sudo service networking restart
        restart: Unknown instance:

    Here are the contents of my /etc/network/interfaces:

        auto lo
        iface lo inet loopback

    I even tried to modify it to this (based on something I read online; not sure if I was doing the right thing here), tried everything again, and had no luck:

        auto lo eth0
        iface lo inet loopback
        iface eth0 inet dhcp
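
    One more thing worth checking: the "Networking Disabled" label usually reflects NetworkManager's own state flag rather than the init scripts above. A sketch of inspecting and resetting it (this assumes the stock NetworkManager of 10.04):

        cat /var/lib/NetworkManager/NetworkManager.state
        sudo service network-manager stop
        sudo sed -i 's/NetworkingEnabled=false/NetworkingEnabled=true/' /var/lib/NetworkManager/NetworkManager.state
        sudo service network-manager start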

    Read the article

  • Routing static IP traffic on a Comcast Business Class IP Gateway (SMCD3G-CCR)

    - by Jakobud
    We are in the process of replacing our firewall, which is currently the only thing connected to our Comcast Business Class modem. Comcast gives us 5 static IP addresses. Currently, all traffic to all 5 static IPs goes directly to the existing firewall. Eventually, obviously, all traffic will go to the new firewall, once the old firewall is removed from the network. But in the meantime, as we will have two firewalls plugged into the same Comcast modem, I need to route certain traffic to the new firewall instead of the old one. The firewall switchover is going to be slow and gradual as I test it, so I can't simply unplug the existing firewall and plug in the new one. So my question is: how do I tell the modem to route all traffic addressed to a specific IP to the new firewall instead of the old one? I've logged into the web interface for the modem, but the available options aren't very clear. There is a 1-to-1 NAT option (whose interface I can't seem to get to work properly), but I also see a "Static Routing" section. I always understood static routing to refer to routing data within the LAN, though, so I'm not sure if that's what I'm looking for or not. Keep in mind, I'm not looking to do simple port forwarding. I want 100% of traffic to certain public static IPs to go to the specified connected firewall (I'll deal with service policies there). The modem is an SMC SMCD3G-CCR and is labeled as a Comcast Business Class Business IP Gateway. Any help or direction would be greatly appreciated.

    Read the article

  • How can I set IIS Application Pool recycle times without resorting to the ugly syntax of Add-WebConfiguration?

    - by ObligatoryMoniker
    I have been scripting the configuration of our IIS 7.5 instance, and through bits and pieces of other people's scripts I have come up with a syntax that I like:

        $WebAppPoolUserName = "domain\user"
        $WebAppPoolPassword = "password"
        $WebAppPoolNames = @("Test","Test2")
        ForEach ($WebAppPoolName in $WebAppPoolNames ) {
            $WebAppPool = New-WebAppPool -Name $WebAppPoolName
            $WebAppPool.processModel.identityType = "SpecificUser"
            $WebAppPool.processModel.username = $WebAppPoolUserName
            $WebAppPool.processModel.password = $WebAppPoolPassword
            $WebAppPool.managedPipelineMode = "Classic"
            $WebAppPool.managedRuntimeVersion = "v4.0"
            $WebAppPool | set-item
        }

    I have seen this done a number of different ways that are less terse, and I like how this syntax of setting object properties looks compared to something like what I see on TechNet:

        Set-ItemProperty 'IIS:\AppPools\DemoPool' -Name recycling.periodicRestart.requests -Value 100000

    One thing I haven't been able to figure out, though, is how to set up recycle schedules using this syntax. This command sets ApplicationPoolDefaults but is ugly:

        add-webconfiguration system.applicationHost/applicationPools/applicationPoolDefaults/recycling/periodicRestart/schedule -value (New-TimeSpan -h 1 -m 30)

    I have done this in the past through appcmd using something like the following, but I would really like to do all of this through PowerShell:

        %appcmd% set apppool "BusinessUserApps" /+recycling.periodicRestart.schedule.[value='01:00:00']

    I have tried:

        $WebAppPool.recycling.periodicRestart.schedule = (New-TimeSpan -h 1 -m 30)

    This has the odd effect of turning the .schedule property into a timespan until I use $WebAppPool = get-item iis:\AppPools\AppPoolName to refresh the variable. There is also $WebappPool.recycling.periodicRestart.schedule.Collection, but there is no add() function on the collection and I haven't found any other way to modify it. Does anyone know of a way I can set scheduled recycle times using syntax consistent with the code I have written above?
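
    For what it's worth, one pattern that stays close to the provider syntax above is New-ItemProperty with a hashtable value (a sketch, untested here; the pool name and time are placeholders):

        Import-Module WebAdministration
        New-ItemProperty "IIS:\AppPools\Test" `
            -Name recycling.periodicRestart.schedule `
            -Value @{value = "01:00:00"}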

    Read the article

  • What is my miniport's service name?

    - by Ian Boyd
    I am trying to query the physical sector size of my drive using fsutil:

        C:\Windows\system32>fsutil fsinfo ntfsinfo c:
        NTFS Volume Serial Number :       0x78cc11b2cc116c1e
        Version :                         3.1
        Number Sectors :                  0x000000003a382fff
        Total Clusters :                  0x00000000074705ff
        Free Clusters :                   0x00000000022fc29b
        Total Reserved :                  0x00000000000007d0
        Bytes Per Sector :                512
        Bytes Per Physical Sector :       <Not Supported>
        Bytes Per Cluster :               4096
        Bytes Per FileRecord Segment :    1024
        Clusters Per FileRecord Segment : 0
        Mft Valid Data Length :           0x00000000305c0000
        Mft Start Lcn :                   0x00000000000c0000
        Mft2 Start Lcn :                  0x0000000003a382ff
        Mft Zone Start :                  0x0000000006951940
        Mft Zone End :                    0x0000000006951c80
        RM Identifier:                    19B22CBE-570D-19DE-9C72-CD758F800DDC

    You can see that the Bytes Per Physical Sector value is Not Supported:

        Bytes Per Physical Sector : <Not Supported>

    In the KB article Microsoft support policy for 4K sector hard drives in Windows, Microsoft says:

        If fsutil.exe continues to display "Bytes Per Physical Sector : <Not Supported>" after you apply the latest storage driver and the required hotfixes, make sure that the following registry path exists:

        HKLM\CurrentControlSet\Services\<miniport's service name>\Parameters\Device\
        Name: EnableQueryAccessAlignment
        Type: REG_DWORD
        Value: 1: Enable

    The only thing I don't know is my miniport's service name. What is my miniport's service name? I know that my SATA drives are in AHCI mode, and AHCI uses the msahci driver service. Is that my miniport service? "MSAHCI"?

    See also:

        Hitachi - Advanced Format Technology Brief
        RMPrepUSB - Advanced Format (4K sector) hard disks
        Microsoft support policy for 4K sector hard drives in Windows
        OSR Online - Advance Disk Format support in Storport Virtual Miniport driver
        Default cluster size for NTFS, FAT, and exFAT
        Wikipedia - Advanced Format
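
    For what it's worth, if msahci does turn out to be the right service name, the registry change from the KB would look like this (a sketch - applying it under the wrong miniport key should simply have no effect):

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\msahci\Parameters\Device" /v EnableQueryAccessAlignment /t REG_DWORD /d 1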

    Read the article

  • centos yum problems

    - by Malachi Soord
    I am really new to using Linux and have just formatted my CentOS 5.2 VPS and am trying to install links by using the command yum install links. But the following error gets displayed:

        [root@inverses ~]# yum install links
        Loading "fastestmirror" plugin
        Loading mirror speeds from cached hostfile
         * lxlabsupdate: download.lxlabs.com
         * lxlabslxupdate: download.lxlabs.com
         * base: ftp.nluug.nl
         * updates: distrib-coffee.ipsl.jussieu.fr
         * addons: mirror.answerstolove.com
         * extras: distrib-coffee.ipsl.jussieu.fr
        http://ftp.nluug.nl/ftp/pub/os/Linux/distr/CentOS/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://distrib-coffee.ipsl.jussieu.fr/pub/linux/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://mirror.ukhost4u.com/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://centosh2.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://mirror.atrpms.net/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://centosf.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://centoso3.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://centosk.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://centosv.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://centosk3.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        Error: Cannot retrieve repository metadata (repomd.xml) for repository: base. Please verify its path and try again

    From what I gather after checking some of those URLs, they now redirect from .../5.2/... to just .../5/. Is this a common thing to have to change, and how could I change it? Here is my CentOS-Base.repo: http://pastebin.com/m67c1a022
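
    For what it's worth, the edit being described would look something like this (a sketch - it assumes 5.2 is hard-coded in the baseurl lines of the repo file; if the file uses $releasever instead, that is the token to pin):

        sed -i 's|/5\.2/|/5/|g' /etc/yum.repos.d/CentOS-Base.repo
        yum clean all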

    Read the article

  • vCenter appliance won't use mail relay server

    - by Safado
    tl;dr: sendmail is configured to use a relay server but still insists on using 127.0.0.1 as the relay, which results in mail not being sent. We have the open source vCenter appliance (v 5.0) managing our ESXi cluster. When connected to it via vSphere Client, you can configure the SMTP relay server to use by going to Administration > vCenter Server Settings > Mail; there you can set the SMTP Server value. I looked through their documentation and also confirmed on the phone with support that all you have to do to configure mail is put the relay IP or FQDN in that box and hit OK. Well, I had done that, and mail still wasn't sending. So I SSH into the server (which is SuSE) and look at /var/log/mail, and it looks like it's trying to relay the email through 127.0.0.1, which is rejecting it. Looking through the config files, I see there's /etc/sendmail.cf and /etc/mail/submit.cf. You can configure items in /etc/sysconfig/sendmail and run SuSEconfig --module sendmail to generate those two .cf files based on what's in /etc/sysconfig/sendmail. Playing around, I see that when you set the SMTP Server value in the vCenter GUI, all that it does is change the "DS" line in /etc/mail/submit.cf to have DS[myrelayserver.com]. Looking on the internet, it would appear that the DS line is really the only thing you need to change in order to use a relay server. I got on the phone with VMware support and spent 2 hours trying to modify ANY setting that had anything to do with relays, and we couldn't get it to NOT use 127.0.0.1 as the relay. Just to note, any time we made any sort of configuration change, we restarted the sendmail service. Does anyone know what's going on? Have any ideas on how I can fix this?
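
    For what it's worth, one diagnostic that narrows this down: the submission agent (submit.cf) normally hands mail to the local daemon at 127.0.0.1, and the daemon then applies sendmail.cf's own DS line - so both files matter. A sketch (the recipient address is a placeholder):

        # DS lines currently in effect for the MTA vs. the submission agent
        grep '^DS' /etc/sendmail.cf /etc/mail/submit.cf

        # force MTA mode (-Am) and watch the relay conversation verbosely
        echo test | sendmail -Am -v admin@example.com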

    Read the article

  • HD Tune warning for "Reallocated Event Count" with a new/unused drive. How serious is that?

    - by Developer Art
    I've just looked at the health status of my old 2.5-inch 500 GB Fujitsu drive with the popular "HD Tune" utility. It shows a warning for the "Reallocated Event Count" property. How serious is that? The thing is that the drive is practically new. I pulled it out of a new laptop over a year ago and never used it since. Right now it only has 53 "Power On" hours, which sounds about right since I only had it running a few evenings overnight before switching it for something more performant. Does this warning indicate that the drive is likely to fail some time in the future? I'm somewhat perplexed, since the drive is effectively unused. What is more, I have arranged with somebody to buy this drive off me, since I don't really need it: it is 12.5 mm thick (with 3 platters), meaning it doesn't fit into an external enclosure, which makes it quite useless to me. Can I give away the drive without having it on my conscience, or should I cancel the deal? In other words, can the drive be used safely for years to come, or should I throw it away? I'm running a sector test now to see if there are any real problems. I will post the results as soon as they're available.
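
    For what it's worth, the same counter can be cross-checked against the raw SMART attribute table with smartmontools (a sketch - the device name is a placeholder):

        smartctl -A /dev/sda | grep -E -i 'Reallocated|Pending'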

    Read the article

  • Windows Vista DHCP bug, arp authorize, isc dhcp, workaround

    - by jinanwow
    I am trying to find a workaround for the Windows Vista force-broadcast bug with ISC DHCP and a Cisco router. The problem is not Windows Vista obtaining an IP address from us; that works fine (with or without the flag enabled). The problem is we are using a Cisco router and the command 'arp authorized' to prevent users from using static IP addresses on the network. The problem is, if Windows Vista sets the broadcast flag to true, the 'arp authorized' command will not work, as it looks for the IP address and destination MAC address in the DHCP Offer packet to add to its ARP table. The machine will DHCP just fine, but since the ARP table is not aware of the machine, it is unable to access the internet. If I disable the broadcast flag in Vista, the next time it DHCPs an ARP entry gets created, since the DHCP Offer is unicast instead of broadcast. The thing is, we cannot tell 500 to 1000 people to edit their registry, so we need a workaround for this issue. I have not had much success in finding one. The question is: is there a way to force or trick ISC DHCP into unicasting a response back to the user? Either on the Cisco side, the ISC DHCP side, or by intercepting and rewriting the DHCP Discover UDP packet to turn off the flag before it reaches ISC DHCP?

    Read the article

  • linux LVM mirror vs. MD mirror

    - by sims
    I think I remember making some mirrors years ago with LVM, and I don't remember this "log" thing. Or maybe I made the mirror with mdadm and put LVM on top. That must be it. What is the LVM log for, if it is just a mirror? What is stored there? What is its purpose? Is using "--mirrorlog core" bad? What's the downside? I don't want to have to have another partition for logs if I don't have to. Any recommendations on using either technology? Even if I make the mirror with mdadm, I'll use LVM on top of it. So, in that case, maybe it's better to have the whole setup built with LVM...? Would that take more of a performance hit, or less? The disks are for storing Xen domU "disks". Sorry for the complex, not-to-the-point "question". Ideas, suggestions, and links are most welcome. Thanks!
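
    For concreteness, the two setups being compared would be created roughly like this (a sketch - device names, sizes, and VG names are placeholders):

        # LVM-native mirror, with the mirror log kept in memory
        lvcreate -m 1 --mirrorlog core -L 100G -n xen_disks vg0

        # or: md RAID-1 underneath, LVM on top
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
        pvcreate /dev/md0
        vgcreate vg0 /dev/md0
        lvcreate -L 100G -n xen_disks vg0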

    Read the article

  • Windows 8 DLNA streaming of MKV files

    - by dbb
    I have a Samsung TV that is capable of playing MKV files. The Windows DLNA Play To menu that appears when you right-click a media file does not support MKV files, but a simple trick has been to change the file extension from .mkv to .avi so the Play To context menu item would appear. At that point I could successfully stream from my computer to my TV. However, this does not appear to work in Windows 8. Doing the same thing in Windows 8 causes the Play To window to open, but the file never plays. Dragging and dropping the file into the Play To window causes it to be silently ignored. Using actual AVI, MP4, etc. files works, of course. It appears Windows 8 is now doing some kind of validation on the file that Windows 7 didn't do previously. The Play To window does not show any kind of obvious error message or warning, and there is nothing in the Windows event log. So, is there a way in Windows 8 to stream MKV files to a DLNA device without converting them to another container format? I would rather not use extra third-party software, but I would consider it if it's purposefully designed for this simple case rather than a more robust "media library/server" solution.

    Read the article

  • Trying to run chrooted suPHP with UserDir, getting 500 server error

    - by Greg Antowski
    I've managed to get suPHP working fine with UserDir (i.e. PHP files run from the /home/$username/public_html directory), but I can't get it to work when I chroot it to the user's home directory. I've been following this guide, adapting it to my needs: http://compilefailure.blogspot.co.nz/2011/09/suphp-chroot-gotchas.html I'm not creating vhosts; I just want PHP scripts to be jailed to the user's home directory. I've gotten to the part where you use makejail and set up a symlink. However, even with the symlink set up correctly, PHP scripts won't run. This is what's shown in the Apache error log:

        SoftException in Application.cpp:537: Could not execute script "/home/jimmy/public_html/test.php"
        [error] [client 127.0.0.1] Caused by SystemException in API_Linux.cpp:444: execve() for program "/usr/bin/php-cgi" failed: No such file or directory

    The thing is, if I try running either of the following commands in the terminal, it works without any issues:

        /home/jimmy/usr/bin/php-cgi /home/jimmy/public_html/test.php
        /usr/bin/php-cgi /home/jimmy/public_html/test.php

    I've been trying for hours to get this going, and documentation for this kind of stuff is almost non-existent. If anyone could help me out with this, I'd be extremely grateful.
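
    For what it's worth, running those commands outside the jail doesn't prove the chrooted view is complete; a closer reproduction is to exec php-cgi from inside the chroot itself (a sketch - paths assume the jail root is /home/jimmy):

        # does /usr/bin/php-cgi resolve *inside* the jail?
        sudo chroot /home/jimmy /usr/bin/php-cgi -v

        # are the binary's shared libraries present under the jail as well?
        ldd /home/jimmy/usr/bin/php-cgi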

    Read the article

  • IIS 7 - 403 Access Denied error on wwwroot

    - by cparker4486
    Hi, I'm trying to set up a redirect from http://mail.mydomain.com to https://mail.mydomain.com/owa. I've been unsuccessful in doing this by using IIS's HTTP Redirect, so I looked at other options. The one I settled on is to create a default document in the wwwroot folder to handle the redirect. I created a file called index.aspx (and added index.aspx to the list of default documents) and put the following code in it:

        <script runat="server">
        private void Page_Load(object sender, System.EventArgs e)
        {
            Response.Status = "301 Moved Permanently";
            Response.AddHeader("Location","https://mail.mydomain.com/owa");
        }
        </script>

    Instead of getting a redirect I get:

        403 - Forbidden: Access is denied.
        You do not have permission to view this directory or page using the credentials that you supplied.

    I've been trying to find an answer to this but have been unsuccessful so far. One thing I did try was to add the Everyone group to wwwroot with read access. No change. The AppPool for Default Web Site is DefaultAppPool and the Identity is ApplicationPoolIdentity. (I don't know what these things are, but maybe knowing this will help you.) Thanks!
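
    For what it's worth, a config-only alternative that avoids executing any page at all is IIS 7's httpRedirect section in a web.config at the site root (a sketch - this is the same feature as the HTTP Redirect UI, so it presumes that role service is installed):

        <configuration>
          <system.webServer>
            <httpRedirect enabled="true"
                          destination="https://mail.mydomain.com/owa"
                          httpResponseStatus="Permanent"
                          exactDestination="true" />
          </system.webServer>
        </configuration>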

    Read the article

  • Googlemail users can't email my email address

    - by Jack W-H
    Hi folks. I have a GridServer account at MediaTemple. The address linked up to my MT account is [email protected]. My non-Google email address could email [email protected] just fine. But when my friend tried to email it from his Gmail address, he got the following message:

        From: Mail Delivery Subsystem
        Date: Thu, Apr 15, 2010 at 12:02 PM
        Subject: Delivery Status Notification (Failure)
        To: [email protected]

        Delivery to the following recipient failed permanently: [email protected]

        Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 relay not permitted (state 14).

        ----- Original message -----
        MIME-Version: 1.0
        Received: by 10.231.205.139 with HTTP; Thu, 15 Apr 2010 12:02:26 -0700 (PDT)
        In-Reply-To: <[email protected]
        References: <[email protected]
        Date: Thu, 15 Apr 2010 12:02:26 -0700
        Received: by 10.231.169.144 with SMTP id z16mr211585iby.25.1271358147047; Thu, 15 Apr 2010 12:02:27 -0700 (PDT)
        Message-ID:
        Subject: Re: Hi Friend
        From: My Friend
        To: "[email protected]"
        Content-Type: multipart/alternative; boundary=0016e6d26c5abcb2a704844b22bf

        Does this work. Does this work. Does this work?

        On Thu, Apr 15, 2010 at 11:30 AM, [email protected] wrote:
        Hi Friend. Just testing the email address I set up for My Site. Could you please reply so I can check if it's working OK? Cheers Jack

    I thought it was just a fluke, but exactly the same thing happens when I use MY Gmail address that I also have. Can anyone shed some light on the problem? Jack
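
    For what it's worth, a 550 "relay not permitted" that only Google triggers often means the domain's MX records are steering Google's delivery attempts to the wrong box; the records are easy to eyeball (a sketch - substitute the real domain):

        dig +short MX mydomain.com
        dig +short A mail.mydomain.com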

    Read the article

  • Account Lockout with pam_tally2 in RHEL6

    - by Aaron Copley
    I am using pam_tally2 to lock out accounts after 3 failed logins per policy; however, the connecting user does not receive the error indicating pam_tally2's action (via SSH). On the 4th attempt I expect to see:

        Account locked due to 3 failed logins

    No combination of required or requisite, or the order in the file, seems to help. This is under Red Hat 6, and I am using /etc/pam.d/password-auth. The lockout does work as expected, but the user does not receive the error described above. This causes a lot of confusion and frustration, as they have no way of knowing why authentication fails when they are sure they are using the correct password. The implementation follows NSA's Guide to the Secure Configuration of Red Hat Enterprise Linux 5 (pg. 45). It's my understanding that the only thing changed in PAM is that /etc/pam.d/sshd now includes /etc/pam.d/password-auth instead of system-auth. The guide says:

        If locking out accounts after a number of incorrect login attempts is required by your security policy, implement use of pam_tally2.so. To enforce password lockout, add the following to /etc/pam.d/system-auth. First, add to the top of the auth lines:

            auth required pam_tally2.so deny=5 onerr=fail unlock_time=900

        Second, add to the top of the account lines:

            account required pam_tally2.so

    EDIT: I can get the error message by resetting pam_tally2 during one of the login attempts:

        user@localhost's password: (bad password)
        Permission denied, please try again.
        user@localhost's password: (bad password)
        Permission denied, please try again.
        (reset pam_tally2 from another shell)
        user@localhost's password: (good password)
        Account locked due to ...
        Account locked due to ...
        Last login: ...
        [user@localhost ~]$
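
    For what it's worth, pam_tally2 ships a same-named admin binary that is handy while testing this, e.g. to watch the counter climb and to do the reset described in the EDIT above (a sketch; the username is a placeholder):

        pam_tally2 --user=jsmith            # show the current failure count
        pam_tally2 --user=jsmith --reset    # clear it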

    Read the article

  • How to configure nginx to serve flv video streaming with JWPlayer

    - by Nisanio
    I need to serve flv files from one server and show those videos on a web page located on another server, with JWPlayer. I already configured nginx with the flv module and put this in nginx.conf:

        location ~ \.flv$ {
            flv;
        }

    The code I use for JWPlayer is:

        <object id="player" classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" name="player" width="328" height="200">
            <param name="movie" value="player.swf" />
            <param name="allowfullscreen" value="true" />
            <param name="allowscriptaccess" value="always" />
            <param name="flashvars" value="file=http://XX.XX.XX.XX/vid5.flv&image=preview2.jpg" />
            <embed type="application/x-shockwave-flash" id="player2" name="player2" src="player.swf" width="630" height="385" allowscriptaccess="always" allowfullscreen="true" flashvars="file=http://XX.XX.XX.XX/vid5.flv&image=preview2.jpg" />
        </object>

    where XX.XX.XX.XX is the IP address of the server (we'll configure an appropriate domain, but first I have to make the whole thing work :) ). The problem is that nothing happens. I don't know what to do next; all the articles on the internet only talk about how to compile the flv module (already done) and add the nginx.conf lines. Any help would be greatly appreciated. Thanks in advance
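
    For what it's worth, a location block on its own serves nothing unless it sits inside a server context with a document root; a fuller minimal sketch (the root path is a placeholder):

        server {
            listen 80;
            server_name XX.XX.XX.XX;

            location ~ \.flv$ {
                flv;                    # enable pseudo-streaming (?start= byte offsets)
                root /var/www/videos;   # directory actually holding vid5.flv
            }
        }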

    Read the article

  • API numbers don't match on compiled PHP extension

    - by tixrus
    I'm trying to get GD into my PHP. I recently installed PHP 5.3.0 on my system running Mac Leopard using MacPorts. It did not come with the gd module. So I downloaded gd, compiled it as an extension module as per http://www.kenior.ch/macintosh/adding-gd-library-for-mac-os-x-leopard, made php.ini point to it, restarted Apache, etc. But no GD. So in the Apache error log it says:

        PHP Warning: PHP Startup: gd: Unable to initialize module
        Module compiled with module API=20060613
        PHP compiled with module API=20090115
        These options need to match
        in Unknown on line 0

    So a bit of googling says I should not use the phpize I have before configuring and making these; I should use a new one called phpize5. I surely don't have any such thing, unless it's packed up inside something else in my PHP 5.3 distro. Where do you get it? In Ubuntu, apparently, I could just run sudo apt-get install php-dev and it would appear by magic. At least that's what the webpage said. Unfortunately, I am running Mac OS X Leopard. How can I build this GD module on Leopard so that it will match the API number in my PHP?
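
    For what it's worth, with a MacPorts-built PHP the usual route is to use the phpize and php-config that MacPorts itself installed, or to skip the manual build entirely (a sketch - paths assume the default /opt/local prefix, and the php5-gd port name is an assumption worth verifying with port search gd):

        # easiest, if the port exists for your PHP:
        sudo port install php5-gd

        # or build the extension against the MacPorts PHP by hand:
        /opt/local/bin/phpize
        ./configure --with-php-config=/opt/local/bin/php-config
        make && sudo make install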

    Read the article

  • Having trouble redirecting frevvo using mod_proxy

    - by user38859
    This question is similar to this: http://serverfault.com/questions/102868/how-to-access-webservers-running-on-ports-blocked-on-companys-network

    Basically, I'm using Confluence and a plugin called frevvo. Confluence sits on port 8080 while frevvo sits on port 8082. I want to redirect both of them to port 80 via the Apache HTTP web server so that they don't get blocked by company proxies. I've been using the document on Atlassian that shows how to run Confluence behind Apache (I can't post a second URL due to being a newbie here). I've successfully redirected Confluence from port 8080 to port 80, so I can now access Confluence using www.example.com/confluence. Now I tried doing the same thing with frevvo, using the following configuration.

    Apache httpd:

        ProxyRequests Off
        ProxyPreserveHost On
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass /confluence http://localhost:8080/confluence
        ProxyPassReverse /confluence http://localhost:8080/confluence
        <Location /confluence>
            Order allow,deny
            Allow from all
        </Location>
        ProxyPass /frevvo http://localhost:8082/
        ProxyPassReverse /frevvo http://localhost:8082/
        <Location /forms>
            Order allow,deny
            Allow from all
        </Location>

    And in server.xml for the frevvo Tomcat instance, I added the following within the <Host> tag:

        <Context path=" " docBase="" debug="0" reloadable="false">
            <!-- Logger is deprecated in Tomcat 5.5. Logging configuration for Confluence is specified in confluence/WEB-INF/classes/log4j.properties -->
            <Manager pathname="" />
        </Context>

    The plugin, frevvo, when accessed through the browser at http://localhost:8082, usually redirects to http://localhost:8082/frevvo/web. With the above configuration, accessing www.example.com.au/frevvo redirects to www.example.com/frevvo/web/static/login - which doesn't work. I hope the above details are clear, and I'd appreciate anyone who could give us some insight.
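
    One thing worth ruling out, since the backend's /frevvo/web path is visibly surviving the proxy: when the application already lives under a /frevvo context on port 8082, the mapping usually has to preserve that path on both directives. A sketch, not a verified fix:

        ProxyPass /frevvo http://localhost:8082/frevvo
        ProxyPassReverse /frevvo http://localhost:8082/frevvo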

    Read the article

  • ActiveSync devices causing accounts to lockout

    - by Abdullah
    When a user changes his account password for whatever reason (read: it expired) while the old password is still stored on his mobile device connected through EAS, his account locks out almost immediately - as it should, according to the lockout policy defined in AD. That part was easy to figure out. The hard part is keeping it from happening. I looked everywhere. Nothing. Basically there are four parts to the puzzle: the EAS device, the TMG (ISA) server, the EAS protocol, and finally AD. None of them has a way to stop the EAS device from failing to authenticate. So I figured I'd have to come up with a clever workaround, and the only thing I could come up with is to create a group for all EAS users and exclude them from the lockout policy, which obviously defeats the whole purpose of the policy, or to educate the users to update their devices with the new passwords, which is impossible. The question: can you think of any other way to prevent EAS from locking out the accounts? Environment: mostly iOS devices, all through EAS. TMG 2010. Exchange 2007. AD 2008 R2.

    Read the article
