Search Results

Search found 19148 results on 766 pages for 'dmg image'.


  • EMC/Legato/Networker Failed to recover files : Cross Platform Recovery not supported.

    - by marc.riera
    Software used to back up: EMC / Legato Networker. Legato server: Windows. Legato clients: same hardware (Fedora two years ago, Ubuntu now).

    We are trying to recover from an old client which is no longer available. On 07/20/2008 we backed up a Samba server (running Fedora) to tape, with both the browse policy and the retention policy set to one year, so that tape is now recyclable. We took down the DNS name and deleted the Legato client configuration. That machine was reinstalled and is now doing other work on Ubuntu 10.04, with a different hostname but the same IP.

    Now, two years and some months later, we need to recover a folder from that 2008 backup of the Fedora Samba server. First problem: Legato no longer shows the client name because the configuration was deleted, so we recreated it. We put the old DNS name back, pointing at the same IP where the old server was (same MAC address), and created a new "old client" configuration pointing at the new server (with a different Legato client ID, I suppose). The save set (ssid) holding the folder spans tapes 20 and 22, and the index for that backup is on tape 21. We loaded the three tapes into the jukebox (IBMT4000 -- not important for the issue). All three tapes are past their browsable and recoverable times, so they are marked recyclable. The steps we took:

        # get the clone id for the save set
        mminfo -avot -q "ssid=<ssid>" -r cloneid

        # mark the tapes not recyclable
        nsrmm -S <ssid>/<cloneid> -o notrecyclable

        # push the retention out to a future date
        nsrmm -S <ssid> -e 01/20/2011

        # check that the dates are correct
        mminfo -avV -q "ssid=<ssid>" -r ssbrowse(26),ssretent(26),savetime

    So far, so good. We closed the terminal and restarted the server, just to be sure. Finally, we recovered the index for the ssid where the folder should be:

        nsrck -L7 -t "07/20/2008" oldservername.domain.org

    Then we opened Networker User, selected the server, selected the old client as source and the new client as destination. And this is what we get (imgur image of the output): http://i.imgur.com/1nOr8.png

    Should I understand that I need to install whatever operating system was running on the old "Linux server" / "Networker client" just to be able to restore 26 MB of files? Thanks.
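    The GUI error suggests the Windows Networker User client will not browse a Linux client's index (the "Cross Platform Recovery not supported" in the title). One hedged workaround, assuming the new Ubuntu machine has the NetWorker client package installed, is to drive the restore from a Linux host with the interactive recover command; a minimal sketch (the server name is a placeholder, the client and date are the ones above):

        # run on a Linux NetWorker client
        recover -s legatoserver -c oldservername.domain.org -t "07/20/2008"
        # inside the interactive session:
        #   cd /path/to/folder      (browse the recovered index)
        #   add .                   (mark the folder for recovery)
        #   relocate /tmp/restore   (restore somewhere safe on this host)
        #   recover                 (start the restore)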


  • Use an audio/video file from a Linux laptop via USB to be played by Magic Sing ET-23H

    - by AisIceEyes
    I am one of the technical directors of a regular karaoke contest event. Due to a tight budget, we use what one of the sponsors provides: a Magic Sing ET-23H. The video output of the ET-23H is broadcast on two big screens shown to the audience and event attendees. When a contestant provides a karaoke video, the video sits on a readable USB flash drive attached to the USB input of the Magic Sing ET-23H.

    What really bugs me is that the device's interface is also broadcast on the big-screen video feeds: the file-selection screen of the Magic Sing ET-23H is seen on the big screens by the audience and event-goers. I will post a picture of the device's USB input in the comments (if my reputation of less than 10 allows me).

    I always bring my laptop, an Acer AS5742-7653, to the event. I use it for tallying the judges' scores and for playing audio files from contestants who did not provide a karaoke video. I personally use different Linux distros, but during the event I almost always boot my Ubuntu Studio 12.04.3 64-bit partition.

    My question is this: is there a way to share a temporary video/audio file directly from the laptop, going to the Magic Sing ET-23H, so that it broadcasts both the video and audio -- something like Windows Avisynth AVS files, VirtualDub's temporary AVI files, or ffplay (from ffmpeg)? I have researched the matter somewhat and found related links on SuperUser.com, which I can only provide in the comments given my reputation. I have a hunch it is possible, but I have not fully understood whether the device can broadcast video and audio from anything besides its USB input. Any help with my current predicament is highly appreciated. Thank you.

    PS: Since I need at least 10 reputation to post more than 2 links and also to post images, I will try to post the image and links in the comments (if my below-10 reputation allows me).


  • Cannot Send Item error in Outlook - permissions to registry?

    - by Tim Alexander
    The issue I am trying to solve is users getting a "Cannot Send Item" error in Outlook 2007 connecting to Exchange 2007. Basically, if there is an image in the email (either one they have pasted in or one from another email in the chain), they get the error. We initially thought it was a Citrix issue, but users also get it when they RDP to a server. Changing the message format to Rich Text works 80% of the time, but I consider that a temporary workaround rather than a solution.

    After some troubleshooting we found that the error can be fixed by adding the user to the local Power Users group; of course, this is not really a fix either. My thought was that a power user's ability to add/remove software gives them more access to the registry, which might let them get around a restriction that applies to a normal user. I have run Process Monitor, but the wealth of information is confusing. It initially looked like an Outlook 2007 email security setting, but that value does not change between power user and normal user (set to 1 in the registry, "Use the security setting from Outlook Security Settings Public Folders"). I am struggling to fine-tune my troubleshooting to work out exactly what is being blocked. Has anyone had experience with an error similar to this? Or are there any tips for tracking down issues via Process Monitor? I must admit my approach seems somewhat lacking.

    EDIT: I have trawled through the two Process Monitor logs (one as a power user and one as a normal user). Annoyingly, I can find no obvious difference where something is denied access. There are more ACCESS DENIED events in the normal-user log, but these are quickly followed by successful accesses to the same path fractions of a second later. The only thing that does stand out is an ACCESS DENIED on HKCR\.html, which does not appear at all in the power-user version of the log. From what I understand, that key helps determine the default browser, which ties in nicely with the fact that 9 times out of 10 you can send the message as Rich Text.

    EDIT: Looks like KB2509470 was causing the issue. I am not really sure why, but when I work out what it does and why it causes the problem I will post here, unless anyone beats me to it!
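    For anyone hitting the same symptom, a quick hedged way to confirm whether that update is present and remove it for testing. These are standard Windows servicing commands run from an elevated prompt (the wusa uninstall switch applies on Windows 7 / Server 2008 R2 and later); the KB number is the one named above:

        wmic qfe where "HotFixID='KB2509470'" get HotFixID,InstalledOn
        wusa /uninstall /kb:2509470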


  • Random Computer Crashes

    - by Josh W.
    OK, here's a weird one for you all. Occasionally my PC here at work crashes in a very peculiar way: my dual monitors suddenly go blank as if there is no longer a video signal, the USB mouse light goes dark and the mouse stays unresponsive, and the keyboard lights do not change status when the appropriate keys are pressed (Num/Caps/Scroll Lock). The CD tray WILL open and close, but the computer does not respond to a ping request. For all intents and purposes it is as if the computer is off, except that wasn't intentional on my part: the power light and internal fans are still on, and I've now lost any unsaved work.

    Now here's where it gets weird. This PC is part of a batch we got from a local vendor who does our initial system builds. Mine and six co-workers' PCs all have the same issue. Originally we thought it was a bad combination of hardware, but through trial and error the only things we haven't eliminated are the OS, motherboard, and CPU. The problem was so bad for some of them that they went back to their five-year-old dinosaurs in order to get some work done. For me the problem isn't as bad, maybe once every other day or so, but still enough to bite me if I've forgotten my ritualistic pressing of Ctrl-S every 1-5 minutes.

    We've tried two different video cards, two different power supplies, two different memory configurations, running on a UPS and not on a UPS, updating and rolling back video drivers, and three different BIOS revisions. The only things we haven't swapped are the motherboard and CPU, mainly because a new motherboard means a new hardware abstraction layer, i.e. a reinstall of Windows, and there's a lot of other software on this PC that takes forever to reload by hand. Our systems team created a base image with all the drivers installed and the basic setup of software our company uses, but they then must customize the setup for us programmers, so it takes a while to get a new configuration up and going.

    I'm a programmer by day and am usually pretty good at diagnosing computer problems, whether through trial and error or not. We've pretty much exhausted all the ideas we can think of here, short of a new motherboard/CPU. I was hoping someone out there might have anything else we can try.

    Relevant parts:

        OS: XP Pro 32-bit
        Motherboard: Intel DG41RQ
        CPU: Intel Core 2 Quad Q9400 @ 2.66GHz
        Current BIOS: Intel Corp. RQG4110H.86A.0014.2010.0306.1151, 3/6/2010
        BIOS versions tried: R0013 (12/23/2009), R0014 (3/6/2010)
        Dual LCDs: Viewsonic VG930m & Samsung SyncMaster 910v (other people have different models, listed in case there's some very weird problem with the signals being sent/received)
        Input: PS/2 keyboard, USB Microsoft IntelliMouse
        Video cards tried: GeForce 8400 GS, ASUS EAH4350 (Radeon HD 4350)
        Power supplies tried: 380W and 550W
        RAM configurations tried: 2GB (1 x 2GB), 4GB (2 x 2GB)


  • Cobbler 2.2.2 problems

    - by Peter
    I have set up a dedicated LAN for Cobbler tests. My setup:

        Cobbler server: openSUSE 12.3, cobbler 2.2.2 (from the openSUSE repos)
        Imported distros: CentOS 6.5, Red Hat 6.5, Red Hat 7.0, openSUSE 13.1
        Target machines: VMs in VirtualBox on Windows 7

    System provisioning works OK, but I have two problems. The first is that Cobbler does not honor the "pxe_just_once: 1" setting: when setup of the target OS is finished, the target system continues to PXE boot after the reboot!

    The second problem is that the target server is not configured correctly. See my setup:

        cobbler system report --name=test
        Name                          : test
        TFTP Boot Files               : {}
        Comment                       :
        Fetchable Files               : {}
        Gateway                       : 192.168.0.1
        Hostname                      : testcob1.example.com
        Image                         :
        IPv6 Autoconfiguration        : False
        IPv6 Default Device           :
        Kernel Options                : {}
        Kernel Options (Post Install) : {}
        Kickstart                     : <<inherit>>
        Kickstart Metadata            : {}
        LDAP Enabled                  : False
        LDAP Management Type          : authconfig
        Management Classes            : []
        Management Parameters         : <<inherit>>
        Monit Enabled                 : False
        Name Servers                  : ['192.168.0.1', '8.8.8.8']
        Name Servers Search Path      : []
        Netboot Enabled               : False
        Owners                        : ['admin']
        Power Management Address      :
        Power ID                      :
        Power Password                :
        Power Management Type         : ipmitool
        Power Username                :
        Profile                       : RHEL-6.5-x86_64
        Proxy                         : <<inherit>>
        Red Hat Management Key        : <<inherit>>
        Red Hat Management Server     : <<inherit>>
        Repos Enabled                 : False
        Server Override               : <<inherit>>
        Status                        : testing
        Template Files                : {}
        Virt Auto Boot                : <<inherit>>
        Virt CPUs                     : <<inherit>>
        Virt Disk Driver Type         : <<inherit>>
        Virt File Size(GB)            : <<inherit>>
        Virt Path                     : <<inherit>>
        Virt RAM (MB)                 : <<inherit>>
        Virt Type                     : <<inherit>>
        Interface =====               : eth0
        Bonding Opts                  :
        Bridge Opts                   :
        DHCP Tag                      :
        DNS Name                      :
        Master Interface              :
        Interface Type                :
        IP Address                    : 192.168.0.200
        IPv6 Address                  :
        IPv6 Default Gateway          :
        IPv6 MTU                      :
        IPv6 Secondaries              : []
        IPv6 Static Routes            : []
        MAC Address                   :
        Management Interface          : True
        MTU                           :
        Subnet Mask                   : 255.255.255.0
        Static                        : True
        Static Routes                 : []
        Virt Bridge                   :

    So, although I have set the hostname and the network interface of the target system, after the install the hostname is set to localhost.localdomain and eth0 is configured for DHCP, not static! How can I find the problem and fix it? Note that I have synced and restarted Cobbler a couple of times, but the problems persist.
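    A couple of hedged suggestions. In Cobbler 2.x, pxe_just_once is implemented by the installed system calling back to the Cobbler server at the end of the kickstart (the kickstart_done snippet rendered into %post), so a custom kickstart template without that snippet never turns netboot off; similarly, the hostname and static-interface settings only reach the installer if the kickstart template includes Cobbler's network snippets. Both are assumptions worth checking against your templates. A manual sketch for testing:

        # turn off netboot for the system by hand, then regenerate PXE configs
        cobbler system edit --name=test --netboot-enabled=false
        cobbler sync

        # confirm what the installer is actually being handed
        cobbler system dumpvars --name=test | grep -i -e hostname -e ip_address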


  • Second HDD not seen by Windows 7 on Dell XPS L501X

    - by George
    I have a Dell XPS laptop (L501X). I replaced the original Seagate 500GB hard drive with an Intel 320 120GB SSD when I first purchased it a year ago, and it's been working great: the laptop boots in about 23 seconds. I keep an Acronis image that I restore every three months just to keep everything clean. The SSD is partitioned with one logical drive for my data.

    Recently, since I rarely use my optical drive, I decided to swap it out for an HDD caddy and add the Seagate 500GB drive as a second disk. I ordered the caddy, placed the HDD in it, and tried to load Windows. It just hangs on the screen that should show the Windows logo. I have tried everything I know and searched online. I uninstalled the SATA AHCI controller and let Windows reinstall it; it still will not boot into Windows. I want to mention that the Seagate 500GB drive is the one that came with the laptop before I switched to the Intel SSD. Intel Rapid Storage Technology loads once in a while and shows the second hard drive, but when I restart it hangs again and Windows will not load. As soon as I remove the HDD caddy and restart, Windows loads fine. I also formatted the Seagate 500GB HDD as NTFS, and Windows still will not load.

    When I go into the BIOS it shows the fixed SSD and also "Sata ODD 500GB" in place of the optical drive, but it will not boot into Windows while the HDD caddy is present. There is nothing wrong with the caddy: I installed it in another laptop (an Asus) and Windows 7 loaded without any glitch. I don't get it. I have also flashed the BIOS because Dell had a new version (A08). I also refreshed Disk Management and Device Manager, and the second drive does not display. At this point I think it's a Windows issue, so before I reinstall Windows 7 Home Premium from scratch I wanted to see if there is anything I am missing. Any advice would be greatly appreciated.
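    One hedged thing worth checking before a reinstall: whether Windows actually has its AHCI miniport driver enabled. If the port the optical bay uses comes up in a mode the installed driver has disabled, boot can hang exactly like this. A sketch, from an elevated command prompt:

        reg query HKLM\SYSTEM\CurrentControlSet\services\msahci /v Start
        :: Start=0x0 means the AHCI driver loads at boot; 0x3/0x4 means
        :: on-demand/disabled. To enable it (then reboot):
        reg add HKLM\SYSTEM\CurrentControlSet\services\msahci /v Start /t REG_DWORD /d 0 /f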


  • Why won't my Windows 8 Command line update its path

    - by mawcsco
    I needed to add a new entry to my PATH variable. This is a common activity for me in my job, but I've recently started using Windows 8, and I assumed the process would be similar to Windows 7, Vista, and XP. Here's my sequence of events:

        1. Open System properties (Start - [type "Control Panel"] - Control Panel\System and Security\System - Advanced system settings - Environment Variables).
        2. Add the new path to the beginning of my USER PATH variable (C:\dev\Java\apache-ant-1.8.4\bin;).
        3. Open a command prompt (Start - [type "command prompt", Enter] - [type "path", Enter]).

    My new path entry is not available (see the attached image and video). I duplicated the exact same process on a Windows 7 machine and it worked.

    EDIT: Windows 8 Environment Variables and Command Prompt video.

    EDIT: This is definitely not the behavior of Windows 7. Watch this video to see the behavior I expect, working in Windows 7: http://youtu.be/95JXY5X0fII

    EDIT 5/31/2013: After much frustration, I wrote a small C# app to test the WM_SETTINGCHANGE event (OnUserPreferenceChanging is the managed equivalent of WM_SETTINGCHANGE). This code receives the event in both Windows 7 and Windows 8. However, on my Windows 8 system the handler does not see the correct path, while on Windows 7 it does. This could not be reproduced on other Windows 8 systems. Here is the C# code:

        using System;
        using Microsoft.Win32;

        public sealed class App
        {
            static void Main()
            {
                // Fires when the system broadcasts a user-preference change
                SystemEvents.UserPreferenceChanging +=
                    new UserPreferenceChangingEventHandler(OnUserPreferenceChanging);
                Console.WriteLine("Waiting for system events.");
                Console.WriteLine("Press <Enter> to exit.");
                Console.ReadLine();
            }

            static void OnUserPreferenceChanging(object sender, UserPreferenceChangingEventArgs e)
            {
                Console.WriteLine("The user preference is changing. Category={0}", e.Category);
                Console.WriteLine("path={0}", Environment.GetEnvironmentVariable("PATH"));
            }
        }

    C# program running in Windows 7: you can see the event come through, and it picks up the correct path. C# program running in Windows 8: you can see the event come through, but the wrong path. There is something about my environment that is precipitating this problem. However, is this a Windows 8 bug?
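    A hedged way to narrow down where the update is getting lost: the Environment Variables dialog writes the user PATH to the registry, while every running process carries its own copy of the environment that its children inherit. Comparing the two from a freshly launched prompt (started from the Start screen, not spawned from an existing console, which would inherit the stale environment) shows whether the registry write happened and simply was not propagated:

        reg query HKCU\Environment /v PATH
        echo %PATH%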


  • APC PHP cache size does not exceed 32MB, even though settings allow for more

    - by hardy101
    I am setting up APC (v3.1.9) on a high-traffic WordPress installation on CentOS 6.0 64-bit. I have figured out many of the quirks with APC, but something is still not quite right: no matter what settings I change, APC never actually caches more than 32MB. I'm trying to bump it up to 256MB. 32MB is the default for apc.shm_size, so I am wondering if it's stuck there somehow.

    I ran the following to increase my system's maximum shared memory segment to 2GB (half of my 4GB box):

        echo '2147483648' > /proc/sys/kernel/shmmax

    Then ipcs -lm returns:

        ------ Shared Memory Limits --------
        max number of segments = 4096
        max seg size (kbytes) = 2097152
        max total shared memory (kbytes) = 8388608
        min seg size (bytes) = 1

    I also made the change in /etc/sysctl.conf and ran sysctl -p to make the settings stick on the server. I rebooted, too, for good measure.

    In my APC settings I have mmap enabled (the default in recent versions of APC). php.ini looks like:

        apc.stat=0
        apc.shm_size="256M"
        apc.max_file_size="10M"
        apc.mmap_file_mask="/tmp/apc.XXXXXX"
        apc.ttl="7200"

    I am aware that mmap mode ignores references to apc.shm_segments, so I have left it at the default of 1. phpinfo() indicates the following about APC:

        Version: 3.1.9
        APC Debugging: Disabled
        MMAP Support: Enabled
        MMAP File Mask: /tmp/apc.bPS7rB
        Locking type: pthread mutex Locks
        Serialization Support: php
        Revision: $Revision: 308812 $
        Build Date: Oct 11 2011 22:55:02

        Directive                   Local Value
        apc.cache_by_default        On
        apc.canonicalize            On
        apc.coredump_unmap          Off
        apc.enable_cli              Off
        apc.enabled                 On
        apc.file_md5                Off
        apc.file_update_protection  2
        apc.filters                 no value
        apc.gc_ttl                  3600
        apc.include_once_override   Off
        apc.lazy_classes            Off
        apc.lazy_functions          Off
        apc.max_file_size           10M
        apc.mmap_file_mask          /tmp/apc.bPS7rB
        apc.num_files_hint          1000
        apc.preload_path            no value
        apc.report_autofilter       Off
        apc.rfc1867                 Off
        apc.rfc1867_freq            0
        apc.rfc1867_name            APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix          upload_
        apc.rfc1867_ttl             3600
        apc.serializer              default
        apc.shm_segments            1
        apc.shm_size                256M
        apc.slam_defense            On
        apc.stat                    Off
        apc.stat_ctime              Off
        apc.ttl                     7200
        apc.use_request_time        On
        apc.user_entries_hint       4096
        apc.user_ttl                0
        apc.write_lock              On

    apc.php shows the following graph no matter how long the server runs: the cache size fluctuates and hovers at just under 32MB (see http://i.stack.imgur.com/2bwMa.png). You can see that the cache is trying to allocate 256MB, but the brown piece of the pie keeps getting recycled at 32MB. This is confirmed by refreshing apc.php: the cached file counts move up and down, implying the cache is not holding onto all of its files.

    Does anyone have an idea of how to get APC to use more than 32MB for its cache size? Note that identical behavior occurs with eAccelerator, XCache, and APC. I read at http://www.litespeedtech.com/support/forum/archive/index.php/t-5072.html that suEXEC could cause this problem.
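    A hedged diagnostic path, prompted by the suEXEC note above: if PHP is executed via CGI/suEXEC rather than as an Apache module, each PHP process holds its own private cache, and what apc.php shows may be one default-sized slice per process rather than a single shared 256MB segment. Two quick checks that would confirm this:

        # what the running SAPI actually loaded, and which ini files were parsed
        php -i | grep -i -e 'apc.shm_size' -e 'Loaded Configuration'

        # one mmap'd cache file per PHP process would point at CGI/suEXEC
        ls -lh /tmp/apc.*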


  • Fix single entry from mbr

    - by Sander
    I use EasyBCD to manage my triple boot of (1) Windows Server 2008 R2, (2) Windows 7 Professional, and (3) Ubuntu Linux. While trying to change the order of my boot menu I ended up losing the Windows Server entry. Luckily I had a boot menu backup (a .bcd file) that allowed me to restore my boot menu using EasyBCD. However, when I now select the Windows Server option in my boot menu, the Windows Server Recovery Environment starts up: I have to select language/keyboard layout/etc., and then I get the three options shown in the image below.

    My goal is to fix the one corrupted Windows Server entry in my boot menu without messing up or losing the two other ones. I'm guessing the Recovery Console (Command Prompt) is the next step and that I will need bootrec.exe. But consulting "Use the Bootrec.exe tool in the Windows Recovery Environment to troubleshoot and repair startup issues in Windows" (about halfway down the page, a link lists the bootrec.exe options), I'm getting uncertain. The page lists four options for bootrec.exe:

        /FixMbr
        /FixBoot
        /ScanOs
        /RebuildBcd

    Which option do I need to fix just the Server entry of my boot menu? Thanks in advance, Sander.

    P.S. All three OSes are on the same physical disk (3 different partitions). Disk layout:

        System Reserved (primary partition, 100 MB)
        Windows 7 (primary partition, 150 GB)
        Windows Server 2008 (primary partition, 150 GB)
        Extended partition (Linux partitions (/, /swap, /home), 150 GB + data partition, 150 GB)

    P.P.S. This is what my boot menu looks like using EasyBCD (Detailed/Debug mode) on my Windows 7 installation:

        Windows Boot Manager
        --------------------
        identifier        {9dea862c-5cdd-4e70-acc1-f32b344d4795}
        device            partition=\Device\HarddiskVolume1
        description       Windows Boot Manager
        locale            en-US
        inherit           {7ea2e1ac-2e61-4728-aaa3-896d9d0a9f0e}
        default           {93f90e43-cae8-11df-b05a-c9177e705936}
        resumeobject      {93f90e3e-cae8-11df-b05a-c9177e705936}
        displayorder      {93f90e43-cae8-11df-b05a-c9177e705936}
                          {93f90e3f-cae8-11df-b05a-c9177e705936}
                          {93f90e46-cae8-11df-b05a-c9177e705936}
        toolsdisplayorder {b2721d73-1db4-4c62-bf78-c548a880142d}
        timeout           10
        displaybootmenu   Yes

        Windows Boot Loader
        -------------------
        identifier        {93f90e43-cae8-11df-b05a-c9177e705936}
        device            partition=\Device\HarddiskVolume3
        path              \Windows\system32\winload.exe
        description       Windows Server 2008 R2 - Standard
        locale            en-US
        inherit           {6efb52bf-1766-41db-a6b3-0ee5eff72bd7}
        recoverysequence  {93f90e44-cae8-11df-b05a-c9177e705936}
        recoveryenabled   Yes
        osdevice          partition=\Device\HarddiskVolume3
        systemroot        \Windows
        resumeobject      {93f90e42-cae8-11df-b05a-c9177e705936}
        nx                OptOut

        Windows Boot Loader
        -------------------
        identifier        {93f90e3f-cae8-11df-b05a-c9177e705936}
        device            partition=C:
        path              \Windows\system32\winload.exe
        description       Windows 7 - Professional
        locale            nl-NL
        inherit           {6efb52bf-1766-41db-a6b3-0ee5eff72bd7}
        recoverysequence  {93f90e40-cae8-11df-b05a-c9177e705936}
        recoveryenabled   Yes
        osdevice          partition=C:
        systemroot        \Windows
        resumeobject      {93f90e3e-cae8-11df-b05a-c9177e705936}
        nx                OptIn

        Real-mode Boot Sector
        ---------------------
        identifier        {93f90e46-cae8-11df-b05a-c9177e705936}
        device            partition=C:
        path              \NST\AutoNeoGrub0.mbr
        description       Ubuntu 10.04 - Lucid Lynx
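    Worth noting, as a hedged observation: none of the four bootrec options edits a single entry (/FixMbr and /FixBoot rewrite boot code, /RebuildBcd regenerates the whole store), so they risk the other two entries. If the restored Server entry is simply resolving to the wrong partition, a lighter-touch sketch is to edit just that entry with bcdedit from the working Windows 7 install; the GUID is the Server entry's identifier in the dump above, and D: is an assumption for whatever letter Windows 7 assigns the Server partition. Take a backup first:

        bcdedit /export C:\bcd_backup
        bcdedit /set {93f90e43-cae8-11df-b05a-c9177e705936} device partition=D:
        bcdedit /set {93f90e43-cae8-11df-b05a-c9177e705936} osdevice partition=D: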


  • Htaccess strange behaviour with Nginx

    - by Termos
    I have a site running on Nginx (v1.0.14) serving as a reverse proxy which proxies requests to Apache (v2.2.19). Nginx runs on port 80, Apache on 8080. Overall the site works fine, except that I cannot block access to certain directories with an .htaccess file. For example, I have 'my-protected-directory' on www.site.com. Inside it I have an .htaccess with the following code:

        <Files *>
        order deny,allow
        deny from all
        allow from 1.2.3.4   <--- my ip address here
        </Files>

    When I try to access this page from my IP (1.2.3.4), I get a 404 error, which is not what I expect:

        http://www.site.com/my-protected-directory

    However, everything works as expected when this page is served directly by Apache. I can see the page; everyone else can't:

        http://www.site.com:8080/my-protected-directory

    Update. Nginx config (7.1.3.7 is the site IP):

        user apache;
        worker_processes 4;

        error_log logs/error.log;
        pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            sendfile on;
            keepalive_timeout 65;

            gzip on;
            gzip_min_length 1024;
            gzip_http_version 1.1;
            gzip_proxied any;
            gzip_comp_level 5;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml
                       application/xml+rss text/javascript image/x-icon;

            server {
                listen 80;
                server_name www.site.com site.com 7.1.3.7;
                access_log logs/host.access.log main;

                # serve static files
                location ~* ^.+.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
                    root /var/www/vhosts/www.site.com/httpdocs;
                    proxy_set_header Range "";
                    expires 30d;
                }

                # pass requests for dynamic content to Apache
                location / {
                    proxy_redirect off;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header Host $http_host;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Range "";
                    proxy_pass http://7.1.3.7:8080;
                }
            }
        }

    Could anyone please tell me what is wrong and how this can be fixed?
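    Two hedged observations. First, an .htaccess only applies to requests Apache itself serves; anything matched by the static-file regex location is served by Nginx and never reaches Apache. Second, requests that are proxied arrive at Apache from the Nginx host's address, so "allow from 1.2.3.4" cannot match the real visitor unless Apache is taught to trust X-Forwarded-For (e.g. via something like mod_rpaf). One alternative sketch is to enforce the restriction in Nginx itself; the directives below are standard ngx_http_access_module, and the ^~ modifier keeps the static-file regex from overriding this prefix location:

        location ^~ /my-protected-directory {
            allow 1.2.3.4;
            deny  all;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://7.1.3.7:8080;
        }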


  • Reliable access to Internet but not local network (not DNS or proxy issues)

    - by Ian Goldby
    I'm looking for help with a Vista Home Premium laptop that has trouble accessing any resource on our home network but accesses the Internet just fine. The setup is this: the Vista laptop and a MacBook Pro connect wirelessly to the router-modem, and a Synology DS212j NAS drive has a wired connection to the router-modem. Devices on the local network are always referred to by IP address, so this cannot be a DNS issue.

    The MacBook Pro connects reliably to the NAS via AFP (network shared folders), SMB (network shared folders), and HTTP. The Vista laptop connects to and browses sites on the Internet without any problems. It can log into the NAS via SMB and list the shared folders (so there is nothing wrong with the login credentials), but when it tries to open any of the folders, Explorer hangs with the spinning cursor for several minutes and then says "\\192.168.1.64\shared\Photos is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions. The specified network name is no longer available." It can ping the NAS successfully. If I try to open the NAS drive's web interface, the browser just hangs; this is the same with IE, Firefox, and Chrome (there is no proxy). I can log into the NAS drive with FTP and navigate directories, but when I try to list the contents of a directory with more than a handful of entries, the FTP client hangs.

    I set up a website on the MacBook. The Vista laptop was able to load some of the pages, but loading any of the images was very hit and miss: images embedded in HTML pages never worked no matter how many times I reloaded the page, but when I linked directly to an image it did load (though several attempts were sometimes needed).

    I tried all of this with the Windows Firewall turned off and with AVG turned off. That made no difference. I'd really appreciate any suggestions anyone can make. The fact that the Vista laptop has trouble with HTTP and FTP as well as SMB connections suggests to me that this is a problem at the TCP level or below. But don't forget it accesses sites outside the LAN with no problems.
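    Since the symptoms cut across SMB, HTTP, and FTP, one hedged thing to try is disabling Vista's TCP auto-tuning and offload features, which were known to misbehave with some routers and NAS devices of that era. From an elevated command prompt, then reboot and retest against 192.168.1.64:

        netsh interface tcp set global autotuninglevel=disabled
        netsh interface tcp set global rss=disabled
        netsh interface tcp set global chimney=disabled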


  • A name was started with an invalid character. Error processing resource

    - by Gallen
    Here is the exact error I'm getting when I try to launch my Default.aspx file from the published folder. Can anybody point me in the right direction?

        The XML page cannot be displayed
        Cannot view XML input using XSL style sheet. Please correct the error and
        then click the Refresh button, or try again later.
        --------------------------------------------------------------------------
        A name was started with an invalid character. Error processing resource
        'file:///C:/inetpub/wwwroot/MHNProServices/Default....

        <%@ Page Title="" Language="C#" MasterPageFile="~/ProServices.Master"
            AutoEventWireup="true" CodeBehind="Default.aspx.cs"...

    Here are the contents of Default.aspx:

        <%@ Page Title="" Language="C#" MasterPageFile="~/ProServices.Master"
            AutoEventWireup="False" CodeBehind="Default.aspx.cs"
            Inherits="MHNProServices.Default" %>

        <asp:Content ID="Content1" ContentPlaceHolderID="head" runat="server">
            <link type="text/css" href="css/Default.css" rel="Stylesheet" />
        </asp:Content>

        <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" runat="server">
            <div id="contentHead">
                <img src="css/img/heading_landing.jpg" />
            </div>
            <div id="contentTop"></div>
            <div id="content">
                <div id="contentLeft">
                    <asp:Image ID="displayPicture" runat="server" />
                    <img id="displayOverlay" src="css/img/profilepicture_overlay.gif" />
                    <a id="contentButton_makeAppointment" href="Appointments.aspx?step=start"></a>
                    <a id="contentButton_cancelAppointment" href="Appointments.aspx?step=cancel"></a>
                </div>
                <div id="contentRight">
                    <h3><asp:Label ID="lbl_homepageHeader" runat="server" Text=""></asp:Label></h3>
                    <hr />
                    <asp:Label ID="lbl_homepageContent" runat="server" Text=""></asp:Label>
                </div>
            </div>
            <div id="contentBottom"></div>
        </asp:Content>
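    The URL in the error ("file:///C:/inetpub/...") is a strong hint: the browser received the raw .aspx markup as a local file rather than a page processed by IIS, so it tries to parse <%@ ... %> as XML and chokes on the '%'. A hedged first step is to browse the site through http://localhost/... instead of opening the file directly, and to make sure ASP.NET is registered with IIS. A sketch (the framework path assumes the 2.0 runtime used by .NET 3.5 sites; adjust for your version):

        %windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i
        iisreset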


  • Repeated requests on our server?

    - by pitty.platsch
    I encountered something strange in the access log of our Apache server which I cannot explain: requests for webpages that I or my colleagues make from the office's Windows network get repeated by another IP (one that we don't know) a couple of seconds later. The user agent repeating our requests is:

        Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2)

    Has anyone an idea?

    Update: I've got some more information now. The Referer of the replicated request is set to the URL I requested before, and it's not the exact same request: the protocol version changes from HTTP/1.1 to HTTP/1.0. The IP is not just one address; it's one of a subnet (80.40.134.*). Only the first request for a resource gets repeated, so it seems the "spy" is building up some kind of cache of visited places. The repeater is also picky. I tried random URLs with different HTTP status codes and different file patterns: 301s and 200s are redone, 404s are not, and image extensions seem to be ignored. While doing my tests I discovered that this behavior seems to be common, as I found other clients visiting just after the first requests:

        66.249.73.184 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 10952 "-" "Mediapartners-Google"
        50.17.125.180 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 41312 "-" "Mozilla/5.0 (compatible; proximic; +http://www.proximic.com/info/spider.php)"

    I wasn't aware of this practice, so I don't see it as much of a threat anymore. I still want to find out who this is, so any further help is appreciated. I'll test later whether this also happens when I query some other server where I have access to the access logs, and will update here.


  • Pushing Large Files to 500+ Computers [closed]

    - by WMIF
    I work with a team to manage 500-600 rented Windows 7 computers for an annual conference. We have a large amount of data that needs to be synced to these computers, up to 1 TiB. The computers are divided into rooms and connected through unmanaged gigabit switches. We prepare these computers ahead of time with the Windows installation and configuration, plus any files that are available to us before we send the base image in for replication by the rental company.

    Every year, presenters approach on site with up to gigabytes of data that need to be pushed to the room they will be presenting in. Sometimes they have only a few small files, such as a slide PDF, but sometimes much more, up to 5 GiB. Our current strategy for pushing these files uses batch scripts and RoboCopy. For the large pushes, we actually generate a torrent file with a BitTorrent client and then use the batch-driven RoboCopy to push the torrent into a folder on the remote machines that is monitored by an installed BT client. Often this data needs to be pushed immediately within a small time window. We have several machines in a control room, identical to the machines on the floor, that we use for these pushes. We occasionally need to execute a program on the remote machines, which we currently handle with batch and PsExec.

    We would love to be able to respond to these last-minute pushes with "sorry, your own fault", but it won't happen. The BitTorrent method has given us a much faster response time, but the whole batch process can get messy when multiple jobs are being pushed. We use Enterprise Ghost for other processes, and it doesn't work well at this scale; plus it is really quite expensive for a once-a-year task like this.

    EDIT: There is a hard requirement that the remote machines on the floor run Windows; the control machines have no hard OS requirement. I would really like to stay away from multicast because of complications with upstream routers. Is multicast or BitTorrent the better way to go on this? Is there another protocol that might work better?
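    For reference, a hedged sketch of the push pattern described above (hostnames and paths are placeholders, not the team's real layout); the RoboCopy that ships with Windows 7 supports multithreaded copies, which helps on gigabit links:

        robocopy C:\staging\room101 \\ROOM101-PC01\c$\conference /MIR /MT:16 /R:2 /W:5 /LOG:C:\logs\room101.log
        psexec \\ROOM101-PC01 -d -s cmd /c "C:\conference\run_setup.cmd"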


  • Achieving A 1600 x 900 Resolution For (Guest OS) Ubuntu Under (Host OS) Windows 7

    - by panamack
    I am running Ubuntu 11.10 as a guest OS in VirtualBox 4.1.16 installed on Windows 7 Ultimate. On my laptop I'd like to run Ubuntu in full-screen mode at 1600 x 900, but within the virtual machine I only get 4:3 display settings such as 1600 x 1200, 1440 x 1050, etc. Guest Additions are installed. At the Windows command prompt I tried:

        VBoxManage setextradata "Virtual Ubuntu Coursera ESSAAS" "CustomVideoMode1" "1600x900x16"

    This didn't work; there is still no 1600 x 900 resolution available in Ubuntu. I tried this after reading the following section of the VirtualBox help (it also says something about a "video mode hint" feature; I'm not sure what that means):

        9.7. Advanced display configuration
        9.7.1. Custom VESA resolutions

        Apart from the standard VESA resolutions, the VirtualBox VESA BIOS allows you
        to add up to 16 custom video modes which will be reported to the guest
        operating system. When using Windows guests with the VirtualBox Guest
        Additions, a custom graphics driver will be used instead of the fallback VESA
        solution so this information does not apply.

        Additional video modes can be configured for each VM using the extra data
        facility. The extra data key is called CustomVideoMode<x> with x being a
        number from 1 to 16. Please note that modes will be read from 1 until either
        the following number is not defined or 16 is reached. The following example
        adds a video mode that corresponds to the native display resolution of many
        notebook computers:

            VBoxManage setextradata "VM name" "CustomVideoMode1" "1400x1050x16"

        The VESA mode IDs for custom video modes start at 0x160. In order to use the
        above defined custom video mode, the following command line has to be
        supplied to Linux:

            vga = 0x200 | 0x160
            vga = 864

        For guest operating systems with VirtualBox Guest Additions, a custom video
        mode can be set using the video mode hint feature.

    UPDATE 02.06.12: I've just tried creating a new virtual machine from the same original disk image I had been given. That image had Guest Additions v4.1.6 installed and provided the 1600 x 900 full-screen display I want. It is after I install Guest Additions v4.1.16 (the version included with my VirtualBox installation) that my only choices become 4:3 displays, e.g. 1600 x 1200. This seems to be the cause.
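    Since Guest Additions are installed, the custom VESA mode route is likely bypassed, and the "video mode hint" mentioned in the quoted manual is the one that applies. A hedged sketch, using the VM name above (the VM must be running for controlvm to work):

        VBoxManage controlvm "Virtual Ubuntu Coursera ESSAAS" setvideomodehint 1600 900 32

    If the guest still refuses the mode, an assumption worth testing inside Ubuntu is adding it via xrandr (the modeline is cvt output for 1600x900 at 60 Hz; the output name VBOX0 is an assumption, check xrandr -q):

        xrandr --newmode "1600x900" 118.25 1600 1696 1856 2112 900 903 908 934 -hsync +vsync
        xrandr --addmode VBOX0 1600x900
        xrandr --output VBOX0 --mode 1600x900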


  • Successfully concatenating multiple videos

    - by wiseguydigital
    My mission is to create videos out of old web slideshows. To start with, I have JPEGs and audio files that worked as Flash slideshows in an old system, structured like this:

        Audio:
        my_audio_1.mp3 (this file is a 3-second MP3 of silence)
        my_audio_2.mp3
        my_audio_3.mp3
        my_audio_4.mp3
        ... roughly 30 MP3s per slideshow

        Images:
        my_image_1.jpg (this acts as the opening slide)
        my_image_2.jpg
        my_image_3.jpg
        my_image_4.jpg
        ... roughly 30 images per slideshow

    As there are almost 100 slideshows to convert to video, I created a web-based interface in PHP to automate the process; it sits on a local system and attempts to combine the files using shell_exec(). The workflow:

        1. Loop through each slide and make an AVI or MPEG. For instance, my_mini_video_2.avi is a video consisting of my_image_2.jpg with my_audio_2.mp3 as its soundtrack, lasting the length of my_audio_2.mp3.
        2. Join / stitch / concat all of the mini videos to create the final video, using a combination of cat and either mencoder or ffmpeg (I have also tried avimerge, to no avail).
        3. Transcode the new "master" video to various formats such as FLV.

    I thought this would be simple and have been close on many occasions, but it still won't work. I can't get past stage 2 because I can't get a perfect master video: the audio and visuals never sync, no matter what I try. I have experimented with every combination of mencoder and ffmpeg I can think of. I have even tried creating audio-less mini videos, joining the MP3s into one long MP3 using both cat and mp3wrap, and then assigning the new long MP3 as the audio track, but this always produces either a very short file or a badly slowed-down file that makes the female voiceover sound like a male boxer!

    There appear to be no problems at all with the original files. Does anybody have experience producing a video successfully from this kind of starting point? Or any ideas on what I may be doing wrong? As an example: if I create silent mini videos and stitch them together into 'temp-master.mpg', and then join the MP3s into a single MP3 called 'temp-master-audio.mp3', the audio file's duration is 09:10 while the video file's duration is 08:35. They should be the same, and the audio will seem slow. I haven't posted code as I have written lots and lots of combinations.
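    A hedged sketch of one pipeline that avoids cat-based joining entirely, using ffmpeg's image loop plus its concat demuxer (available in recent ffmpeg builds; older ones may need -strict experimental for the aac encoder). File names follow the pattern above; every slide must be encoded with identical codec parameters for the stream copy at the end to work, and -shortest ties each slide's length to its MP3, which is what keeps audio and video in sync:

        # 1. one video per image/audio pair
        for i in $(seq 1 30); do
            ffmpeg -loop 1 -i "my_image_${i}.jpg" -i "my_audio_${i}.mp3" \
                   -c:v libx264 -tune stillimage -pix_fmt yuv420p \
                   -c:a aac -shortest "slide_${i}.mp4"
        done

        # 2. concatenate without re-encoding
        for i in $(seq 1 30); do echo "file 'slide_${i}.mp4'"; done > list.txt
        ffmpeg -f concat -i list.txt -c copy master.mp4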


  • Create text file named after a cell containing other cell data

    - by user143041
    I tried using the code below in the Excel program on my Mac Mini (OS X version 10.7.2) and it keeps raising an error about the file name / path. (The Excel file I am creating is going to be a template with my formulas and macros installed, which will be used over and over.)

        Sub CreateFile()
            Dim MyFile As String
            Dim fnum As Integer
            Do While Not IsEmpty(ActiveCell.Offset(0, 1))
                ' Build a full path next to the workbook; a bare file name like
                ' "name.txt" can trigger the path error on Mac Excel
                MyFile = ThisWorkbook.Path & Application.PathSeparator & ActiveCell.Value & ".txt"
                fnum = FreeFile()
                Open MyFile For Output As #fnum
                Print #fnum, ActiveCell.Offset(0, 1) & " " & ActiveCell.Offset(0, 2)
                Close #fnum
                ActiveCell.Offset(1, 0).Select
            Loop
        End Sub

    What I'm trying to do:

    1st objective: I would like the following data to be used to create text files. Column A holds the file names and B2 holds the content. So A2 ("repair-video-game-Glassboro-NJ-08028.txt") is the file name and B2 is the content of that file; next, A3 is the file name and B3 is the content for that file, and so on. Once the loop reaches the content in cells A16 and B16 (the length will vary), file creation should stop; if it doesn't, I can delete the additional files created. This sheet will never change. Is there a way to make the Excel macro always start from this sheet instead of having to select the starting point with the mouse?

    2nd objective: I would like the following data to be used to create a text file. A1 is the file name ("geo-sitemap.xml"; ignore the .xml file extension in the photo) and column B is the content I want in the file. Once the loop reaches the content in cell B16 (the length will vary), file creation should stop; if it doesn't, I can adjust the cells that need content (the formulated content you see in the image is preset for 500 rows). This sheet will never change. Again, is there a way to make the macro always start from this sheet instead of having to select the starting point with the mouse?

    I can provide the content of the cells that are filled in by Excel formulas and are not to be included in the .txt files. If that is not possible, I can delete the extra cells that are not populated (based on the data sheet). Please let me know if you need any additional information or clarity and I will be happy to provide it.


  • Need help troubleshooting highly variable ping times

    - by Elliot.Bradshaw
    I'm at work using Citrix (think Remote Desktop) to connect to client sites. With my job I have to write a fair bit of code while connected remotely via Citrix, so the latency of my Internet connection is important: with ping times above 250ms it becomes almost impossible to scroll, click, or type with accuracy.

    Recently my Comcast business Internet has been exhibiting highly variable ping times. If I ping google.com, I get pings ranging from 9ms all the way up to 1300ms. The problem seems to be at its worst between 1PM and 4:30PM; outside those hours the variance in pings settles down, mostly between 9ms and 50ms. The signal-to-noise ratio and upstream power on my modem are both fine; the values are here: http://pastebin.com/D4hWGPXf

    I ran a traceroute from my computer to google.com (results here: http://pastebin.com/GcdjYvMh) and did another test ping to the IP of the first hop outside our local network (73.98.44.1); the variance in ping times existed in exactly the same manner as if I were pinging Google. Connecting directly to the cable modem via CAT5 makes no difference. Here is a screenshot demonstrating the variance of the ping times: http://postimage.org/image/haocdeauv/full/ -- as you can see, it can get pretty bad.

    Three Comcast techs have been out (two of them were here when the problem wasn't happening), and neither they nor regional tier 2 Comcast support were able to diagnose the problem. I now have a ticket open with tier 3 support but have yet to hear back from them. Does anyone know what could cause these sorts of problems, or have any idea from the traceroute above where it could be originating? The regional tier 2 guy tried to tell me that what I'm seeing is normal -- are highly variable ping times like that ever acceptable? Anything I should ask Comcast to do or look at to get this problem fixed? Any tips/advice much appreciated!

    Edit: This is Comcast cable Internet at a small start-up. We've ruled out congestion in our private LAN as a cause (i.e., no one's watching YouTube when the pings become variable).

    Update: Tier 3 Comcast support advised swapping out the modem; a tech came here today and did that. The same problem persists.
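    For documenting the problem to the ISP, a hedged sketch of per-hop evidence gathering from a Windows machine during the bad 1PM-4:30PM window (pathping pings every hop over several minutes, which can show which hop introduces the variance; file names are placeholders):

        pathping -n google.com > pathping_afternoon.txt
        ping -n 900 73.98.44.1 > firsthop_afternoon.txt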


  • How To Configure Remote Desktop To Hyper-V Guest Virtual Machines

    - by Brian Jackett
    Configuring Remote Desktop (RDP) from a host Hyper-V machine to a guest virtual machine can be tricky, so this post is dedicated to the issues and resolution steps I went through to allow RDP. Cutting to the point, below are the things to look for, followed by some explanation about my scenario if you care to read. This is not an exhaustive list of what is required, just the items that were causing problems for my particular scenario.

    Requirements

        - Allow Remote Desktop connections in the guest OS.
        - The network adapter type must allow communication with the host machine (e.g. use an "Internal" virtual adapter).
        - If running Server 2008 R2 on the guest, network discovery mode must be turned on.
        - If running Server 2008 R2 on the guest, the services supporting network discovery mode must be running: DNS Client, Function Discovery Resource Publication, SSDP Discovery, UPnP Device Host.

    My Environment

    A quick word about my environment. I am running Windows Server 2008 R2 with Hyper-V on my laptop and numerous guest VMs running Windows Server 2003 R2 or Windows Server 2008 R2. I run a domain controller VM and then 1 or 2 SharePoint servers depending on my work needs. I've found this setup to work well except when it comes to the display window for my VMs.

    The Issue

    Ever since I began running Hyper-V I haven't been able to RDP to my guest VMs, which means the resolution of my connection windows has been limited to what the native Hyper-V connections allow. During personal use I can put the resolution up to 1152 x 864, but during presentations I am usually limited to a measly 800 x 600. That is, until today, when I decided to fully investigate why I couldn't connect via RDP.

    First, a thank you to John Ross (@johnrossjr), Christina Wheeler (@cwheeler76) and Clayton Cobb (@warrtalon) for various suggestions while I was researching tonight. As it turns out I had not 1, not 2, but 3 items preventing me from using RDP. Let's dig into the requirements above.

    Allow RDP Connection

    This item I had previously taken care of, but it bears repeating because by default Windows Server 2008 R2 does not allow RDP connections. Change the setting from "Don't allow…" to whichever "Allow connections…" setting suits your needs. I chose the less secure option as this is just my dev laptop.

    Network Adapter Type

    When I originally configured my VMs I gave each two network adapters: one using the physical ethernet adapter for internet use, and a virtual private adapter for communication between the VMs. The connection on the physical adapter is an "External" adapter and thus doesn't connect between the host and guest. The virtual "Private" adapter allows communication ONLY between the VMs, not to my host. The third option, "Internal", allows communication between VMs as well as to the host. After finding out this distinction I promptly created an Internal network adapter and assigned it to my VMs.

    Turn On Network Discovery

    Seems like a common-sense thing, but in order to allow remote desktop connections the target computer must be findable by the source computer. One of the settings that controls whether a computer can be found on the network is aptly named Network Discovery. By default Windows Server 2008 R2 turns Network Discovery off for security purposes. To enable it, open the Network and Sharing Center and click "Change Advanced Sharing Settings" on the left. On the following screen select "Turn on network discovery" for the currently used profile and click Save Settings. You may notice, though, that your selection to turn on network discovery doesn't save. If this is the case, then you most likely don't have the supporting services running (as was my case).

    Network Discovery Supporting Services

    There are a total of 4 services (listed below) that need to be running before you can turn on network discovery. In my guest VM I found that DNS Client was already running while the other 3 were disabled. I set them all to enabled and started the ones that were stopped. After this change I returned to the Sharing settings screen and found that Network Discovery was turned on. I'm not sure whether this was picking up my attempt to turn it on previously or whether starting those services turned it on; either way, the end result was a success. (A command-line equivalent follows after the links below.)

        - DNS Client
        - Function Discovery Resource Publication
        - SSDP Discovery
        - UPnP Device Host

    Before and After Results

    The first image is the smaller, square-shaped viewing window used by the native Hyper-V connection. The second is the full-screen RDP connection in all its widescreen glory.

    Conclusion

    Over the past few months I've found Hyper-V to be very useful for virtualizing my development environments, but I've also had a steep learning curve getting various items configured just right. Allowing RDP connections to guest VMs was one area I hadn't been able to get right for the longest time. Now that I've resolved these issues I hope that others can avoid the pitfalls I ran into. If you know of any other items I left off, feel free to let me know.

        -Frog Out

    Links

    Turning on Network Discovery
    http://sqlblog.com/blogs/john_paul_cook/archive/2009/08/15/remote-desktop-connection-on-windows-server-2008-r2.aspx

    Services required for Network Discovery
    http://social.technet.microsoft.com/Forums/en-US/winservergen/thread/2e1fea01-3f2b-4c46-a631-a8db34ed4f84
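    For the services step above, a hedged command-line equivalent (the short service names are the standard ones for those display names; run from an elevated prompt in the guest):

        sc config Dnscache start= auto
        sc config FDResPub start= auto
        sc config SSDPSRV start= auto
        sc config upnphost start= auto
        net start FDResPub
        net start SSDPSRV
        net start upnphost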


  • Ask How-To Geek: Dropbox in the Start Menu, Understanding Symlinks, and Ripping TV Series DVDs

    - by Jason Fitzpatrick
    This week we take a look at how to incorporate Dropbox into your Windows Start menu, understanding and using symbolic links, and how to rip your TV series DVDs into unique, high-quality episode files. Once a week we dip into our reader mailbag and help readers solve their problems, sharing the useful solutions with you in the process. Read on to see our fixes for this week's reader dilemmas.

    Add Dropbox to Your Start Menu

    Dear How-To Geek, I use Dropbox all the time and would like to add it right onto my Start menu alongside the other major shortcuts like Documents, Pictures, etc. It seems like adding Dropbox into the menu should be part of the Dropbox installation package! Sincerely, Dropboxing in Des Moines

    Dear Dropboxing, We agree, it would be a nice installation option. As it stands, you're going to have to do a little simple hacking to get Dropbox nestled neatly into your Start menu. The hack isn't super elegant, but when you're done you'll have the link you want and it'll look like it was there all along. Check out this step-by-step guide here in order to take an existing Library shortcut and rework it to be a Dropbox link.

    Understanding and Using Symbolic Links

    Dear How-To Geek, I was talking to a coworker the other day about an issue I'd been having with a media center application I'm running. He suggested using symbolic links to better organize my media and make it easier for the application to access my collection. I had no idea what he was talking about and never got a chance to bug him about it later. Can you clear up this whole symbolic links business for me? I've been using computers for years and I've never even heard of it! Sincerely, Symbolic Who?

    Dear Symbolic, Symbolic links aren't commonly used by many Windows users, which is why you likely haven't run into the concept. Symbolic links are essentially supercharged shortcuts; the newly introduced Windows library system is really just a type of symbolic link system. You can use symbolic links to do all sorts of neat stuff, like linking folders to your Dropbox folder, organizing media, and more. The concept of symbolic links is pretty simple, but the execution can be really tricky. We'd suggest reading over our guide to creating symbolic links in Windows 7, Windows XP, and Ubuntu to get a clearer idea of what you're getting into (a minimal example follows at the end of this column).

    Rip Your TV DVDs into Handy Episode Files

    Dear How-To Geek, My wife got me an iPod for Christmas and I still haven't got around to filling it up. I have tons of entire TV show seasons on DVD and would like to get them on the iPod, but I have absolutely no idea where to start. How do I get the shows off the discs? I thought it would be as easy to import the TV shows into iTunes as it is to import tracks off a CD, but I was totally wrong. I tried downloading some applications to rip them, but those didn't work at all. Very frustrating! Surely there is an easy and/or automated way to do this, right? Sincerely, Free My DVDs

    Dear DVDs, Oh man, is this a frustration we can relate to. It's inordinately difficult to get movies and TV shows off physical media and into digital (and portable-media-player-friendly) formats. There are a multitude of ways to rip DVDs and quite a few applications out there (some good, some mediocre, and some outright malware). We'd recommend a two-part punch to solve your ripping woes: you'll need a copy of DVDFab to strip away the protections on the discs and rip the disc, and Handbrake to load the disc image and convert the files.
    It's not quite as smooth as the CD-to-iTunes workflow, but it's still pretty easy. Check out all the steps and settings you'll want to toggle here. Have a question you want to put before the How-To Geek staff? Shoot us an email at [email protected] and then keep an eye out for a solution in the Ask How-To Geek column.
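    For the symbolic-links question above, a minimal sketch of the core command (run from an elevated command prompt; the paths are hypothetical examples):

        :: create a directory symbolic link so the media app sees D:\Media\TV at C:\Users\Me\Videos\TV
        mklink /D "C:\Users\Me\Videos\TV" "D:\Media\TV"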


  • Parallelism in .NET – Part 1, Decomposition

    - by Reed
    The first step in designing any parallelized system is Decomposition.  Decomposition is nothing more than taking a problem space and breaking it into discrete parts.  When we want to work in parallel, we need to have at least two separate things that we are trying to run.  We do this by taking our problem and decomposing it into parts. There are two common abstractions that are useful when discussing parallel decomposition: Data Decomposition and Task Decomposition.  These two abstractions allow us to think about our problem in a way that helps leads us to correct decision making in terms of the algorithms we’ll use to parallelize our routine. To start, I will make a couple of minor points. I’d like to stress that Decomposition has nothing to do with specific algorithms or techniques.  It’s about how you approach and think about the problem, not how you solve the problem using a specific tool, technique, or library.  Decomposing the problem is about constructing the appropriate mental model: once this is done, you can choose the appropriate design and tools, which is a subject for future posts. Decomposition, being unrelated to tools or specific techniques, is not specific to .NET in any way.  This should be the first step to parallelizing a problem, and is valid using any framework, language, or toolset.  However, this gives us a starting point – without a proper understanding of decomposition, it is difficult to understand the proper usage of specific classes and tools within the .NET framework. Data Decomposition is often the simpler abstraction to use when trying to parallelize a routine.  In order to decompose our problem domain by data, we take our entire set of data and break it into smaller, discrete portions, or chunks.  We then work on each chunk in the data set in parallel. This is particularly useful if we can process each element of data independently of the rest of the data.  In a situation like this, there are some wonderfully simple techniques we can use to take advantage of our data.  By decomposing our domain by data, we can very simply parallelize our routines.  In general, we, as developers, should be always searching for data that can be decomposed. Finding data to decompose if fairly simple, in many instances.  Data decomposition is typically used with collections of data.  Any time you have a collection of items, and you’re going to perform work on or with each of the items, you potentially have a situation where parallelism can be exploited.  This is fairly easy to do in practice: look for iteration statements in your code, such as for and foreach. Granted, every for loop is not a candidate to be parallelized.  If the collection is being modified as it’s iterated, or the processing of elements depends on other elements, the iteration block may need to be processed in serial.  However, if this is not the case, data decomposition may be possible. Let’s look at one example of how we might use data decomposition.  Suppose we were working with an image, and we were applying a simple contrast stretching filter.  When we go to apply the filter, once we know the minimum and maximum values, we can apply this to each pixel independently of the other pixels.  This means that we can easily decompose this problem based off data – we will do the same operation, in parallel, on individual chunks of data (each pixel). Task Decomposition, on the other hand, is focused on the individual tasks that need to be performed instead of focusing on the data.  
    Task Decomposition, on the other hand, is focused on the individual tasks that need to be performed instead of focusing on the data.  In order to decompose our problem domain by tasks, we need to think about our algorithm in terms of discrete operations, or tasks, which can then later be parallelized.

    Task decomposition, in practice, can be a bit trickier than data decomposition.  Here, we need to look at what our algorithm actually does, and how it performs its actions.  Once we have all of the basic steps taken into account, we can try to analyze them and determine whether there are any constraints in terms of shared data or ordering.  There is no simple pattern to look for when finding tasks we can decompose for parallelism; every algorithm is unique in terms of its tasks, so every algorithm will have unique opportunities for task decomposition.

    For example, say we want our software to perform some customized actions on startup, prior to showing our main screen.  Perhaps we want to check for proper licensing, notify the user if the license is not valid, and also check for updates to the program.  Once we verify the license, and that there are no updates, we’ll start normally.  In this case, we can decompose this problem into tasks – we have a few tasks, but there are at least two discrete, independent tasks (check licensing, check for updates) which we can perform in parallel.  Once those are completed, we will continue on with our other tasks.

    One final note – Data Decomposition and Task Decomposition are not mutually exclusive.  Often, you’ll mix the two approaches while trying to parallelize a single routine.  It’s possible to decompose your problem based on data, then further decompose the processing of each element of data based on tasks.  This just provides a framework for thinking about our algorithms, and for discussing the problem.
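    And a sketch of the startup scenario, again using the Task Parallel Library as one possible realization; CheckLicense and CheckForUpdates are hypothetical placeholders for the operations described above.

    using System;
    using System.Threading.Tasks;

    static class StartupChecks
    {
        // Task decomposition: the license check and the update check
        // are discrete, independent operations, so they can run as
        // parallel tasks.
        public static void Run()
        {
            Parallel.Invoke(
                () => CheckLicense(),      // hypothetical license check
                () => CheckForUpdates());  // hypothetical update check

            // Parallel.Invoke blocks until both tasks complete;
            // continue with normal startup here.
            Console.WriteLine("Startup checks complete.");
        }

        static void CheckLicense() { /* validate license, notify user if invalid */ }
        static void CheckForUpdates() { /* query the update server */ }
    }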

    Read the article

  • Test Drive Windows 7 Online with Virtual Labs

    - by Matthew Guay
    Did you miss out on the Windows 7 public beta and want to try it out before you actually make the leap and upgrade? Maybe you want to learn how to deploy new features in a business environment. Here’s how you can test drive Windows 7 directly from your browser.

    Whether you manage 10,000 desktops or simply manage your own laptop, it’s usually best to test out a new OS before installing it.  If you’re upgrading from Windows XP you may find many things unfamiliar.  Microsoft has set up a special Windows 7 Test Drive website with resources to help IT professionals test and deploy Windows 7 in their workplaces.  This is a great resource to try out Windows 7 from the comfort of your browser, and look at some of the new features without even installing it.

    Please note that the online version is not nearly as responsive as a full standard install of Windows 7.  It also does not run the full Aero interface or desktop effects, and may refresh slowly depending on your Internet connection.  So don’t judge Windows 7’s performance based on this virtual lab, but use it as a way to learn more about Windows 7 without installing it.

    Getting Started

    To test drive Windows 7, visit Microsoft’s Windows 7 Test Drive website (link below).  You will need to run the Windows 7 Test Drive in Internet Explorer, as it requires ActiveX support; we received an error when attempting to run the Test Drive in Firefox.

    Now, click the “Take a Test Drive” link on the bottom left of the page.  This site includes several test drives to demonstrate different features of Windows 7 and its related ecosystem of products, including Windows Server 2008 R2; some of them, including the XP Mode test drive, are not yet ready.  For this test, we selected the MED-V test drive, as it includes Office 2007 and 2010 so you can test them in Windows 7 as well.  Simply select the test drive you want, and click “Try it now!”

    If you haven’t run a Windows test drive before, you will be asked to install an ActiveX control.  Click the link to install.  Click the yellow bar at the top of the page in Internet Explorer, and select to install the add-on.  You may have to approve a UAC prompt to finish the install.

    Once this is finished, click the link on the bottom of the page to return to your test drive.  The test drive page should automatically refresh; if it doesn’t, click refresh to reload it.  Now the test drive will load its components.

    Once it’s fully loaded, click the link to launch Windows 7 in a new window.  You may see a prompt warning that the server may have been impersonated.  Simply click Yes to proceed.  The test lab will give you some getting started directions; click Close Window when you’re ready to try out Windows 7.

    Here’s the default desktop in the Windows 7 test drive.  You can use it just like a normal Windows computer, but do note that it may function slowly depending on your Internet connection.  This test drive includes both Office 2007 and the Office 2010 Tech Preview, so you can try out both in Windows 7 as well.  You can try out the new Windows 7 applications, such as the reworked Paint with the Ribbon interface from Office.  Or you can even test the newest version of Media Center, though it will warn you that it may not function well with the down-scaled graphics in the test drive.

    Most importantly, you can try out the new features in Windows 7, such as Jump Lists and even Aero Snap.  Once again, these features will not be at their quickest, but the lab does let you test them out.
    The Virtual Lab walks you through a series of tasks as you work.  You can also download a copy of the lab manual in PDF format to help you navigate through the various objectives.  The test drive system runs Microsoft Forefront Security, the enterprise security solution from which Microsoft Security Essentials adapted some of its components.

    Conclusion

    These virtual labs are great for tech students, or those of you who want a first-hand trial of the new features.  Also, if you’re not sure how to deploy something and want to practice in a virtual environment, these labs are quite valuable.  While these labs are geared toward IT professionals, they’re a good way for anyone to try out Windows 7 features from the comfort of their current computer.

    Test Drive Windows 7

    Read the article

  • Windows Phone 7 development: Using isolated storage

    - by DigiMortal
    In my previous posting about Windows Phone 7 development I showed how to use the WebBrowser control in Windows Phone 7. In this posting I make some other improvements to my blog reader application, and I will show you how to use isolated storage to store information on the phone.

    Why isolated storage?

    Isolated storage is the place where your application can save its data and settings. The image on the right (that I stole from the MSDN library) shows you how the application data store is organized. You have no other options to keep your files besides isolated storage, because Windows Phone 7 does not allow you to save data directly to other file system locations. From MSDN:

    “Isolated storage enables managed applications to create and maintain local storage. The mobile architecture is similar to the Silverlight-based applications on Windows. All I/O operations are restricted to isolated storage and do not have direct access to the underlying operating system file system. Ultimately, this helps to provide security and prevents unauthorized access and data corruption.”

    Saving files from the web to isolated storage

    I updated my RSS reader so it reads the RSS feed from the web only if there is no local file with the RSS. The user can update the RSS file by clicking a button. The file is also created when the application starts and no RSS file exists yet. Why am I doing this? I want my application to be able to work offline as well. As my code needs some more refactoring, I will provide it in upcoming postings about Windows Phone 7. If you want it sooner, please leave me a comment here.

    Here is the code for my RSS downloader, which downloads the RSS feed and saves it to an isolated storage file called rss.xml.

    public class RssDownloader
    {
        private string _url;
        private string _fileName;

        public delegate void DownloadCompleteDelegate();
        public event DownloadCompleteDelegate DownloadComplete;

        public RssDownloader(string url, string fileName)
        {
            _url = url;
            _fileName = fileName;
        }

        public void Download()
        {
            // Start the asynchronous request; ResponseCallback runs when it completes.
            var request = (HttpWebRequest)WebRequest.Create(_url);
            request.BeginGetResponse(ResponseCallback, request);
        }

        private void ResponseCallback(IAsyncResult result)
        {
            var request = (HttpWebRequest)result.AsyncState;
            var response = request.EndGetResponse(result);

            // Copy the response body into an isolated storage file.
            // Note: using _fileName here rather than a hard-coded "rss.xml",
            // since the file name is already passed to the constructor.
            using (var stream = response.GetResponseStream())
            using (var reader = new StreamReader(stream))
            using (var appStorage = IsolatedStorageFile.GetUserStoreForApplication())
            using (var file = appStorage.OpenFile(_fileName, FileMode.OpenOrCreate))
            using (var writer = new StreamWriter(file))
            {
                writer.Write(reader.ReadToEnd());
            }

            if (DownloadComplete != null)
                DownloadComplete();
        }
    }

    Of course, I modified the RSS source for my application to use the rss.xml file from isolated storage. As isolated storage files are also based on streams, we can use them anywhere streams are expected.

    Reading isolated storage files

    As isolated storage files are opened as streams, you can read them like usual files in your usual applications. The next code fragment shows you how to open a file from isolated storage and read it using XmlReader. Previously I used the response stream in the same place.
    using (var appStorage = IsolatedStorageFile.GetUserStoreForApplication())
    using (var file = appStorage.OpenFile("rss.xml", FileMode.Open))
    {
        var reader = XmlReader.Create(file);

        // more code
    }

    As you can see, there is nothing complex here. If you have worked with the System.IO namespace objects, you will find the isolated storage classes and methods very similar. Also note that the application store and isolated storage files must be disposed when you are no longer using them.
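    To show how the pieces fit together, here is a minimal usage sketch of the RssDownloader class above, as it might appear in a page constructor. LoadRssFromStorage is a hypothetical placeholder for the code that opens rss.xml (as shown above) and binds it to the UI.

    // The DownloadComplete event fires on a background thread, so
    // marshal back to the UI thread before touching any controls.
    var downloader = new RssDownloader("http://example.com/feed/rss", "rss.xml");
    downloader.DownloadComplete += () =>
        Deployment.Current.Dispatcher.BeginInvoke(() => LoadRssFromStorage());
    downloader.Download();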

    Read the article

  • SQL Developer Blitz at ODTUG Kscope12

    - by thatjeffsmith
    Oracle Development Tools User Group (ODTUG) puts on an outstanding event, and I enjoy that the content comes FIRST. Yes, the after-event parties and entertainment are first class, but I look forward most to sitting in on some excellent sessions. For Kscope12 one would expect Oracle to have a large presence, and you would be absolutely correct! The APEX team will be there in full force, and we’ll have sessions on JDeveloper, ADF, and .NET. But what I want to talk about today is our awesome line-up of coverage for Oracle SQL Developer (Surprise!)

    DB and Developer’s Toolbox Symposium

    Kris Rice, or @krisrice, Product Development Manager for SQL Developer, will speak at 10 AM Sunday about SQL Developer Data Modeler. Our free data modeling solution allows one to reverse engineer a data dictionary to a model, modify it, and create a script of the changes. Collaboration is an important part of any development team; with built-in Subversion support, the modeler makes collaboration easy, not just possible.

    After the morning break, I’ll be talking about SQL Developer’s PL/SQL support. From creating your code to debugging, tuning, testing, and documenting PL/SQL – SQL Developer fits the bill. Since I have a full hour, I should have time to do a little riff on using source control to version and manage your revisions too!

    At 3:15 Jagan Athreya will talk about the new integration between SQL Developer and Enterprise Manager Cloud Control 12c. Enabling developers to define changes in SQL Developer and allowing DBAs to promote these changes to Test and Production via Enterprise Manager will reduce errors, accelerate productivity, and help eliminate unplanned downtime. Get your SQL Developer groove on at ODTUG Kscope12!

    Presentations

    SQL Developer Tips and Tricks
    When: Monday June 25, Session 5, 4:15 pm – 5:15 pm
    I’ll take you through my favorite keyboard shortcuts, the top 10 preferences every user should tweak, and spotlight features that the average user probably hasn’t discovered yet. My goal for this session is for everyone to take away 1-2 tips they can implement immediately to save mucho time. I enjoy interacting with the audience, so no two versions of this presentation are the same.

    Oracle SQL Developer and Data Modeler New Features
    When: Tuesday June 26, Session 6, 8:30 am – 9:30 am
    Ashley Chen, my PM-partner-in-crime, will be covering all the new features from our two latest updates. So if you’re new to SQL Developer, or you’ve been using an older version, stop by and see what new toys you have to play with. I also have a bet with Ashley that she will have more attendees than me, so be sure to show up so I can collect.

    Debugging PL/SQL With SQL Developer
    When: Wednesday June 27, Session 16, 3:00 pm – 4:00 pm
    Me again – sorry. This time I have an entire hour to JUST talk about PL/SQL and debugging! Should you use a watch with a break condition, or a breakpoint with a passcount? How does external debugging with a Perl script work? Can I just debug an anonymous PL/SQL block? So if debugging to you is just a DBMS_OUTPUT.PUT_LINE() call, stop by and see how our IDE can help you take things to the next level! Or is that level++?

    Hands-on Training

    SQL Developer Soup to Nuts
    When: Tuesday, 8:00 AM – 9:30 AM
    If you learn by doing, this is the session for you. Bring your own laptop or use one of the lab machines.
    We’ll give you a VirtualBox OEL image running 11gR2 EE Database with all the fixin’s (that’s Southern speak for Partitioning, Advanced Compression, Tuning & Diagnostic Packs, etc.), TimesTen, APEX, and much more. All you have to do is log in and run through our lab exercises. You can start with a model and work your way up to debugging and testing your own application, or you can pick and choose your lessons to suit your needs. We’ll have people on hand to help you out and answer your questions.

    Booth Hours

    We’ll be in the vendor area and have our very own ‘demo pod’ for SQL Developer. Between Kris, Ashley, and me, we should be able to answer your questions or show you how to ‘do that thing’ in the tool. Or just stop by and say hello! We’ll be around the following hours’ish:

    Sunday, June 24, 2012: 6:00 PM – 8:00 PM
    Monday, June 25, 2012: 9:00 AM – 4:30 PM
    Tuesday, June 26, 2012: 9:30 AM – 3:30 PM
    Wednesday, June 27, 2012: 10:15 AM – 2:00 PM

    No Excuses – If You Have Questions, This is Your Chance to Get Your Answers!

    We’re doing just about everything outside of a scavenger hunt to bring information and value to our users. Let us know what you like, what you don’t like, and we’ll do our best to do more of the former and less of the latter!

    Read the article

< Previous Page | 716 717 718 719 720 721 722 723 724 725 726 727  | Next Page >