Search Results

Search found 1955 results on 79 pages for 'scott johnson'.

Page 24/79 | < Previous Page | 20 21 22 23 24 25 26 27 28 29 30 31  | Next Page >

  • ODI 11g – Oracle Multi Table Insert

    - by David Allan
    With the IKM Oracle Multi Table Insert you can generate Oracle-specific DML for inserting into multiple target tables from a single query result – without reprocessing the query or staging its result. To exploit the IKM you must split the design into reusable parts: the select goes in one interface (I named it SELECT_PART) and each target goes in a separate interface (INSERT_SPECIAL and INSERT_REGULAR). So for my statement below… /* INSERT_SPECIAL interface */ insert all when 1=1 and (INCOME_LEVEL > 250000) then into SCOTT.CUSTOMERS_NEW (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED) values (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED) /* INSERT_REGULAR interface */ when 1=1 then into SCOTT.CUSTOMERS_SPECIAL (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED) values (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED) /* SELECT_PART interface */ select CUSTOMERS.EMAIL EMAIL, CUSTOMERS.CREDIT_LIMIT CREDIT_LIMIT, UPPER(CUSTOMERS.NAME) NAME, CUSTOMERS.USER_MODIFIED USER_MODIFIED, CUSTOMERS.DATE_MODIFIED DATE_MODIFIED, CUSTOMERS.BIRTH_DATE BIRTH_DATE, CUSTOMERS.MARITAL_STATUS MARITAL_STATUS, CUSTOMERS.ID ID, CUSTOMERS.USER_CREATED USER_CREATED, CUSTOMERS.GENDER GENDER, CUSTOMERS.DATE_CREATED DATE_CREATED, CUSTOMERS.INCOME_LEVEL INCOME_LEVEL from SCOTT.CUSTOMERS CUSTOMERS where (1=1) … First I create the SELECT_PART temporary interface for the query to be reused; in its IKM assignment I state that it defines the query, is not a target, and should not be executed. Then in my INSERT_SPECIAL interface, which loads a target with a filter, I set define query to false, target table to true and execute to false. This interface uses the SELECT_PART query-definition interface as a source. Finally, in the last interface, which loads another target, I set define query to false again, target table to true and execute to true – this is the 'go run it' indicator! To coordinate the statement construction you will need to create a package with the select and insert interfaces. With 11g you can now execute the package in simulation mode and preview the generated code, including the SQL statements. Hopefully this helps shed some light on how you can leverage the Oracle MTI statement. A similar IKM exists for Teradata: the ODI IKM Teradata Multi Statement supports multi-statement requests in 11g. Here is an extract from the paper at www.teradata.com/white-papers/born-to-be-parallel-eb3053/ – Teradata Database offers an SQL extension called a Multi-Statement Request that allows several distinct SQL statements to be bundled together and sent to the optimizer as if they were one. Teradata Database will attempt to execute these SQL statements in parallel. When this feature is used, any sub-expressions that the different SQL statements have in common will be executed once, and the results shared among them. It works in the same way as the ODI MTI IKM: multiple interfaces are orchestrated in a package, each interface contributes some SQL, and the last interface in the chain executes the multi-statement request.
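
    For reference, a minimal sketch of the multi-table insert shape the IKM generates – the target names match the statement above, but the column list is cut down purely for illustration:

      INSERT ALL
        WHEN (INCOME_LEVEL > 250000) THEN
          INTO SCOTT.CUSTOMERS_NEW (ID, NAME, INCOME_LEVEL)
          VALUES (ID, NAME, INCOME_LEVEL)
        WHEN 1 = 1 THEN
          INTO SCOTT.CUSTOMERS_SPECIAL (ID, NAME, INCOME_LEVEL)
          VALUES (ID, NAME, INCOME_LEVEL)
      SELECT CUSTOMERS.ID            AS ID,
             UPPER(CUSTOMERS.NAME)   AS NAME,
             CUSTOMERS.INCOME_LEVEL  AS INCOME_LEVEL
        FROM SCOTT.CUSTOMERS CUSTOMERS;

    Each WHEN branch corresponds to one target interface, and the shared SELECT is evaluated only once for all targets.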

    Read the article

  • Microsoft Cloud Day - the ups and downs

    - by Charles Young
    The term ‘cloud’ can sometimes obscure the obvious. Today’s Microsoft Cloud Day conference in London provided a good example. Scott Guthrie was halfway through what was an excellent keynote when he lost network connectivity. This proved very disruptive to his presentation, which centred on a series of demonstrations of the Azure platform in action. Great efforts were made to find a solution, but no quick fix presented itself. The venue’s IT facilities were dreadful – no WiFi, poor 3G reception (forget 4G…this is the UK) and, unbelievably, no-one on hand from the venue staff to help with infrastructure issues. Eventually, after an unscheduled break, a solution was found and Scott managed to complete his demonstrations. Further connectivity issues occurred during the day. In the keynote’s case, at least, I can say that the cause was prosaic: a member of the venue staff had interfered with a patch board and inadvertently disconnected Scott Guthrie’s machine from the network by pulling out a cable. I need to state the obvious here. If your PC is disconnected from the network it can’t communicate with other systems. This could include a machine under someone’s desk, a mail server located down the hall, a server in the local data centre, an Internet search engine or even, heaven forbid, a role running on Azure. Inadvertently disconnecting a PC from the network does not imply a fundamental problem with the cloud or any specific cloud platform. Some of the tweeted comments I’ve seen today are analogous to suggesting that, if you accidentally unplug your microwave from the mains, there must be some fundamental flaw with the electricity supply to your house. This is poor reasoning, to say the least. As far as the conference was concerned, the connectivity issue in the keynote, coupled with some later problems in a couple of presentations, served to exaggerate the perception of poor organisation. Software problems encountered before the conference prevented the correct set-up of a smartphone app intended to convey agenda information to attendees. Although some information was available via this app, the organisers decided to print out an agenda at the last moment. Unfortunately, the agenda sheet did not convey enough information, and attendees were forced to approach conference staff through the day to clarify the locations of the various presentations. Despite these problems, the overwhelming feedback from conference attendees was very positive. There was a real sense of excitement in the morning keynote. For many, this was their first sight of the new Azure features delivered in the ‘spring’ release. The most common reaction I heard was amazement and appreciation that Azure’s new IaaS features deliver built-in template support for several flavours of Linux from day one. This, coupled with open source SDKs and several presentations on Azure’s support for Java, node.js, PHP, MongoDB and Hadoop, served to communicate that the Azure platform is maturing quickly. The new virtual network capabilities also surprised many attendees, and the much improved portal experience went down very well. So, despite some very irritating and disruptive problems, the event served its purpose well, communicating the breadth and depth of the newly upgraded Azure platform. I enjoyed the day very much.

    Read the article

  • F1 Pit Pragmatics

    - by mikef
    "I hate computers. No, really, I hate them. I love the communications they facilitate, I love the conveniences they provide to my life. but I actually hate the computers themselves." - Scott Merrill, 'I hate computers: confessions of a Sysadmin' If Scott's goal was to polarize opinion and trigger raging arguments over the 'real reasons why computers suck', then he certainly succeeded. Impassioned vitriol sits side-by-side with rational debate. Yet Scott's fundamental point is absolutely on the money - Computers are a means to an end. The IT industry is finally starting to put weight behind the notion that good User Experience is an absolutely crucial goal, a cause championed by the likes of Microsoft's Bill Buxton, and which Apple's increasingly ubiquitous touch screen interface exemplifies. However, that doesn't change the fact that, occasionally, you just have to man up and deal with complex systems. In fact, sometimes you just need to sacrifice everything else in the name of performance. You'll find a perfect example of this Faustian bargain in Trevor Clarke's fascinating look into the (diabolical) IT infrastructure of modern F1 racing - high performance, high availability. high everything. To paraphrase, each car has up to 100 sensors, transmitting around 30Gb of data over the course of a race (70% in real-time). This data is then processed by no less than 3 servers (per car) so that the engineers in the pit have access to telemetry, strategy information, timing feeds, a connection back to the operations room in the team's home base - the list goes on. All of this while the servers are exposed "to carbon dust, oil, vibration, rain, heat, [and] variable power". Now, this is admittedly an extreme context where there's no real choice but to use complex systems where ease-of-use is, at best, a secondary concern. The flip-side is seen in small-scale personal computing such as that seen in Apple's iDevices, which are incredibly intuitive but limited in their scope. In terms of what kinds of systems they prefer to use, I suspect that most SysAdmins find themselves somewhere along this axis of Power vs. Usability, and which end of this axis you resonate with also hints at where you think the IT industry should focus its energy. Do you see yourself in the F1 pit, making split-second decisions, wrestling with information flows and reticent hardware to bend them to your will? If so, I imagine you feel that computers are subtle tools which need to be tuned and honed, using the advanced knowledge possessed only by responsible SysAdmins (If you have an iPhone, I suspect it's jail-broken). If the machines throw enigmatic errors, it's the price of flexibility and raw power. Alternatively, would you prefer to have your role more accessible, with users empowered by knowledge, spreading the load of managing IT environments? In that case, then you want hardware and software to have User Experience as their primary focus, and are of the "means to an end" school of thought (you're probably also fed up with users not listening to you when you try and help). At its heart, the dichotomy is between raw power (which might be difficult to use) and ease-of-use (which might have some limitations, but you can be up and running immediately). Of course, the ultimate goal is a fusion of flexibility, power and usability all in one system. It's achievable in specific software environments, and Red Gate considers it a target worth aiming for, but in other cases it's a goal right up there with cold fusion. 
I think it'll be a long time before we see it become ubiquitous. In the meantime, are you Power-Hungry or a Champion of Usability? Cheers, Michael Francis Simple Talk SysAdmin Editor

    Read the article

  • Kill all the project files!

    - by jamiet
    Like many folks I’m a keen podcast listener, and yesterday my commute was filled by listening to Scott Hunter being interviewed on .NET Rocks about the next version of ASP.NET. One thing Scott said really struck a chord with me. I don’t remember the full quote but he was talking about how the ASP.NET project file (i.e. the .csproj file) is going away. The rationale is that the main purpose of that file is to list all the other files in the project, and that’s something the file system is pretty good at. In Scott’s own words (that someone helpfully put in the comments): A file that lists files is really redundant when the OS already does this. Romeliz Valenciano correctly pointed out on Twitter that there will still be a project.json file; however, there will no longer be a need to keep a list of files in a project file. I suspect project.json will simply contain a list of exclusions where necessary, rather than the current approach where the project file is a list of inclusions. On the face of it this seems like a pretty good idea. I’ve long been a fan of convention over configuration and this is a great example of that. Instead of listing all the files in a separate file, just treat all the files in the directory as being part of the project. Ostensibly the approach is: if it’s in the directory, it’s part of the project. Simple. Now I’m not an ASP.NET developer, far from it, but it did occur to me that the same approach could be applied to the two Visual Studio project types that I am most familiar with, SSIS & SSDT. Like many people I’ve long been irritated by SSIS projects that display a faux file system inside Solution Explorer. As you can see in the screenshot below, the project has Miscellaneous and Connection Managers folders, but no such folders exist on the file system. This may seem like a minor thing but it means useful Solution Explorer features like Show All Files and Open Folder in Windows Explorer don’t work, and quite frankly it makes me feel like a second-class citizen in the Microsoft ecosystem. I’m a developer, treat me like one. Don’t try and hide the detail of how a project works under the covers, show it to me. I’m a big boy, I can handle it! Would it not be preferable to simply treat all the .dtsx files in a directory as being part of a project? I think it would; that’s pretty much all the .dtproj file does anyway (that, and present things in a non-alphabetic order – something else that wildly irritates me), so why not just get rid of the .dtproj file? In the case of SSDT the .sqlproj actually does a whole lot more than simply list files, because it also states the BuildAction of each file (Build, NotInBuild, Post-Deployment, etc…), but I see no reason why the convention-over-configuration approach can’t help us there either. Want to know which is the post-deployment script? Well, it’s the one called Post-DeploymentScript.sql! Simple! So that’s my new crusade. Let’s kill all the project files (well, the .dtproj & .sqlproj ones anyway). Are you with me? @Jamiet
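
    Purely as a hypothetical illustration of that exclusion-based idea – this is not the actual project.json schema, just a sketch of the convention-over-configuration shape – such a file might look like:

      {
        "exclude": [ "scratch/**", "**/*.bak" ],
        "dependencies": { }
      }

    Everything else in the directory would simply be assumed to be part of the project.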

    Read the article

  • The Definitive C++ Book Guide and List

    - by grepsedawk
    After more than a few questions about deciding on C++ books I thought we could make a better community wiki version. Providing QUALITY books and an approximate skill level. Maybe we can add a short blurb/description about each book that you have personally read / benefited from. Feel free to debate quality, headings, etc. Note: There is a similar post for C: The Definitive C Book Guide and List Reference Style - All Levels The C++ Programming Language - Bjarne Stroustrup C++ Standard Library Tutorial and Reference - Nicolai Josuttis Beginner Introductory: C++ Primer - Stanley Lippman / Josée Lajoie / Barbara E. Moo Accelerated C++ - Andrew Koenig / Barbara Moo Thinking in C++ - Bruce Eckel (2 volumes, 2nd is more about standard library, but still very good) Best practices: Effective C++ - Scott Meyers Effective STL - Scott Meyers Intermediate More Effective C++ - Scott Meyers Exceptional C++ - Herb Sutter More Exceptional C++ - Herb Sutter C++ Coding Standards: 101 Rules, Guidelines, and Best Practices - Herb Sutter / Andrei Alexandrescu C++ Templates The Complete Guide - David Vandevoorde / Nicolai M. Josuttis Large Scale C++ Software Design - John Lakos Above Intermediate Modern C++ Design - Andrei Alexandrescu C++ Template Metaprogramming - David Abrahams and Aleksey Gurtovoy Inside the C++ Object Model - Stanley Lippman Classics / Older Note: Some information contained within these books may not be up to date and no longer considered best practice. The Design and Evolution of C++ - Bjarne Stroustrup Ruminations on C++ Andrew Koenig / Barbara Moo Advanced C++ Programming Styles and Idioms - James Coplien

    Read the article

  • IIS: 404 error on every file in a virtual directory.

    - by Scott Chamberlain
    I am trying to write my first WCF service for IIS 6.0. I followed the instructions on MSDN. I created the virtual directory, I can browse the directory fine but anything I click (even a sub-folder in that folder) gives me a 404 error. What am I missing that I can not access any files or folders? Any logs or whatnot you need just tell me where to find them in the comments and I will post them. UPDATE- Found the log, here is what it says when I connect and try to click on a sub folder. #Software: Microsoft Internet Information Services 6.0 #Version: 1.0 #Date: 2010-03-07 19:08:07 #Fields: date time s-sitename s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status 2010-03-07 19:08:07 W3SVC1 74.62.95.101 GET /prx2.php hash=AA70CBCE8DDD370B4A3E5F6500505C6FBA530220D856 80 - 221.192.199.35 Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.0) 404 0 2 #Software: Microsoft Internet Information Services 6.0 #Version: 1.0 #Date: 2010-03-07 22:21:20 #Fields: date time s-sitename s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status 2010-03-07 22:21:20 W3SVC1 127.0.0.1 GET /RemoteUserManagerService/ - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+WOW64;+Trident/4.0;+.NET+CLR+3.0.04506.30;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.04506.648;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) 401 2 2148074254 2010-03-07 22:21:26 W3SVC1 127.0.0.1 GET /RemoteUserManagerService/ - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+WOW64;+Trident/4.0;+.NET+CLR+3.0.04506.30;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.04506.648;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) 401 1 0 2010-03-07 22:21:26 W3SVC1 127.0.0.1 GET /RemoteUserManagerService/ - 80 webinfinity\srchamberlain 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+WOW64;+Trident/4.0;+.NET+CLR+3.0.04506.30;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.04506.648;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) 200 0 0 2010-03-07 22:21:29 W3SVC1 127.0.0.1 GET /RemoteUserManagerService/bin/ - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+WOW64;+Trident/4.0;+.NET+CLR+3.0.04506.30;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.04506.648;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) 404 0 2 --Update again I found this here IIS6 Dynamic Content: A 404.2 entry in the W3C Extended Log file is recorded when a Web Extension is not enabled. Use the IIS Microsoft Management Console (MMC) snap-in to enable the appropriate Web extension. Default Web Extensions include: ASP, ASP.net, Server-Side Includes, WebDAV publishing, FrontPage Server Extensions, Common Gateway Interface (CGI). Custom extensions must be added and explicitly enabled. See the IIS 6.0 Help File for more information. I am guessing the 404 0 2 at the end of the log is a 404.2 error. I now know the why, I still don't know the how on how to fix it.

    Read the article

  • Booting from integrated RAID controller when another RAID controller is installed in a PCIe slot

    - by Antony Scott
    I have a GA MA785GT UD3H motherboard with Windows Server 2008 R2 installed on a RAID1 using the on-board RAID controller. I have now installed a RocketRaid 2680 controller and set up a RAID5 for all my data to be stored on. Unfortunately I now cannot boot from the RAID1 anymore, the PC is trying to boot from the RAID5! Does anyone have any experience of this motherboard / RAID controller combination?

    Read the article

  • Executing Oracle SQLPlus in a Powershell Invoke-Command statement against a remote machine

    - by Scott Muc
    We have a basic PowerShell script that attempts to execute sqlplus.exe on a remote machine. The remote machine does not have the Oracle Instant Client installed, but we have bundled all the necessary DLLs in a remote folder. For example, we have sqlplus.exe and its dependencies in the directory C:\temp\oracle. If I navigate to that path on the remote server and execute sqlplus.exe it runs just fine – I get the prompt for a username. If I go: Invoke-Command -comp remote.machine.host -ScriptBlock { C:\temp\oracle\sqlplus.exe } I get the following: Error 57 initializing SQL*Plus + CategoryInfo : NotSpecified: (Error 57 initializing SQL*Plus:String) [], RemoteException + FullyQualifiedErrorId : NativeCommandError Error loading message shared library Thinking that it's potentially a PATH issue I tried the following: Invoke-Command -comp remote.machine.host -ScriptBlock { $env:ORACLE_HOME = "C:\temp\oracle"; $env:PATH = "$env:ORACLE_HOME"; C:\temp\oracle\sqlplus.exe } This had the same result. The error code is not very helpful and is extremely frustrating since it does work when I log on to the machine. What is PowerShell remoting doing that's making this not work?
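
    As a sketch only – the host name and paths are simply the question's examples, and nothing here is verified against this environment – the environment variables can be set inside the remote script block before the call, along these lines:

      # hypothetical sketch; remote.machine.host and C:\temp\oracle are the examples above
      Invoke-Command -ComputerName remote.machine.host -ScriptBlock {
          $env:ORACLE_HOME = 'C:\temp\oracle'
          $env:PATH        = "$env:ORACLE_HOME;$env:PATH"   # prepend rather than replace the PATH
          & "$env:ORACLE_HOME\sqlplus.exe" -V               # -V just prints the version banner
      }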

    Read the article

  • Apachebench on node.js server returning "apr_poll: The timeout specified has expired (70007)" after ~30 requests

    - by Scott
    I just started working with node.js, and some experimental load testing with ab is returning an error at around 30 requests or so. I've found other pages showing much better concurrency numbers than I'm seeing, such as: http://zgadzaj.com/benchmarking-nodejs-basic-performance-tests-against-apache-php Are there some critical server configuration settings that need to be made to achieve those numbers? I've watched memory in top and I still see a decent amount of free memory while running ab, and I've watched mongostat as well and not seen anything that looks suspicious. The command I'm running, and the error, is: ab -k -n 100 -c 10 postrockandbeyond.com/ This is ApacheBench, Version 2.0.41-dev <$Revision: 1.121.2.12 $> apache-2.0 Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Copyright (c) 2006 The Apache Software Foundation, http://www.apache.org/ Benchmarking postrockandbeyond.com (be patient)...apr_poll: The timeout specified has expired (70007) Total of 32 requests completed Does anyone have any suggestions on things I should look into that may be causing this? I'm running it on OS X Lion, but have also run the same command on the server with the same results. EDIT: I eventually solved this issue. I was using TTAPI, which connects to turntable.fm through websockets. On the homepage I was opening a connection on every request, so after a certain number of connections everything would fall apart. If you're running into the same issue, check whether you are hitting external services on each request.

    Read the article

  • SCVMM 2008 R2 problems migrating VM from VS2005 to Hyper-V host

    - by Scott Ivey
    I have System Center Virtual Machine Manager 2008 R2 installed, and have a Hyper-V R2 host and a Virtual Server 2005 host. I'm trying to migrate my machines from the VS2005 host to the Hyper-V host, and keep getting the following error... VMM is unable to complete the requested file transfer. The connection to the HTTP server myserver.mydomain.local could not be established. (Unknown error (0x80072efd)) Recommended Action Ensure that the HTTP service and/or the agent on the machine myserver.mydomain.local are installed and running and that a firewall is not blocking HTTPS traffic. (Note – migrations between Hyper-V hosts managed by the VMM server work fine; my problem is just going from the VS2005 host to the Hyper-V host.) I have no firewalls turned on on either of the servers, and no firewalls in the middle. I've looked all over for answers to this problem, and am getting nowhere. All the articles I find when searching are talking about either V2V or P2V – and I'm just trying to do a straight VM migration. I've tried rebooting the boxes, changing the BITS SSL port number, restarting services, triple-checking firewalls, etc. Does anyone have any good suggestions as to how I can resolve this problem?

    Read the article

  • Why isn't ICMP routing with iptables nat routing

    - by Scott Forsyth - MVP
    I'm using iptables on Ubuntu server to route a public IP to a private IP. I want to nat all traffic, including 80, 443 and ICMP. However, it appears that ICMP isn't routing. I have a steady ping going to the public IP and it never stops, even with NAT pointing to a bogus IP. Here are the rules that I'm using: iptables -t nat -I PREROUTING -d 206.72.119.76 -j DNAT --to-destination 10.240.5.5 iptables -t nat -I POSTROUTING -s 10.240.5.5 -j SNAT --to-source 206.72.119.76 I tried with rules for ICMP specifically, but no such luck: iptables -t nat -I PREROUTING -d 206.72.119.76 - icmp --icmp-type echo-request -j DNAT --to-destination 10.240.5.5 Any ideas?
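
    For reference – and not necessarily the fix for the behaviour described – a protocol-specific rule is normally written with an explicit -p icmp match (the "- icmp" in the rule above reads like a transcription of "-p icmp"; iptables would not accept it as written), and IP forwarding must be enabled on the box. A sketch using the same example addresses:

      # sketch only - addresses are the question's examples
      sysctl -w net.ipv4.ip_forward=1
      iptables -t nat -I PREROUTING -d 206.72.119.76 -p icmp --icmp-type echo-request -j DNAT --to-destination 10.240.5.5
      iptables -t nat -I POSTROUTING -s 10.240.5.5 -p icmp -j SNAT --to-source 206.72.119.76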

    Read the article

  • Getting SMB file shares working over a PPTP VPN

    - by Ben Scott
    I'm having issues getting SMB file shares working over a PPTP VPN. The server setup consists of a security device (DrayTek V3300) which passes the PPTP authentication to a SBS2003 server running RRAS. The server is the DC and provides DNS and WINS, the single NIC's name server is set to the NIC's IP (192.168...), and DHCP on the DrayTek sets the server IP as the DNS. If I create a new VPN connection in Win7, leaving everything as default apart from the server, username, password and domain, I can: ping everything by IP address resolve IPs with nslookup using their fully-qualified name, as in nslookup fileserver.mydomain.local ping machines by fully-qualified name, as in ping fileserver.mydomain.local However if I try to access a file share: within Explorer, I get "Windows cannot access ..." with "Error code: 0x80004005 Unspecified Error", using net use z: \\fileserver.mydomain.local\share, I get "System error 53 has occurred. The network path was not found." If I add the machine name to my HOSTS file I can use the file share, which is my last-ditch workaround, but I have a number of VPN users and would rather a solution that doesn't involve me trying to hand-edit system files on computers half a country away. If I set the WINS server explicitly in the connection's IPv4 settings I don't have to use the FQN to ping the machine, but that doesn't change anything else. EDIT: The PC I'm having the issue on is running Win 7 Home Premium. After more testing I actually have two other PCs that work, one W7HP, one XP Home, and another Vista PC that doesn't work (not tested as much as the others), all four on the same internet connection (behind the same router). All of them were tested with a straight-forward, all defaults, new VPN configuration.

    Read the article

  • Oops, no RSA or DSA server certificate found for 'server.host.name:0'?

    - by Scott Warren
    I'm setting up a new web server that hosts a dozen virtual hosts on Ubuntu 12.04 using Apache 2.2.22, with one config file per site. I created all the configuration files at once and ran a2ensite * to enable them all in one go. When I reloaded the configuration it failed, and after restarting Apache I found the following error message in my error.log: Oops, no RSA or DSA server certificate found for 'server.host.name:0'?! Most of the results for this error message are years old and either don't fix the problem or are bugs that have since been fixed: https://issues.apache.org/bugzilla/show_bug.cgi?id=31709
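
    This message is mod_ssl's way of saying that a server it is initialising has SSL enabled but no usable certificate configured. For reference, a minimal sketch of the directives an SSL vhost normally carries – the paths and name below are placeholders, not the poster's configuration:

      <VirtualHost *:443>
          ServerName server.host.name
          SSLEngine on
          SSLCertificateFile    /etc/ssl/certs/server.host.name.crt
          SSLCertificateKeyFile /etc/ssl/private/server.host.name.key
      </VirtualHost>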

    Read the article

  • RoboCopy fails with "the specified network name is no longer available"

    - by Justin Scott
    We have a scheduled task that runs robocopy periodically to mirror a rather large folder structure from one server to another (thousands of folders, 100,000+ files, 50+ GB in size). There is a share on the receiving server where the mirror gets stored. We're running the task from the origin server connecting out to the share on the receiving end. Both servers run Windows Server 2003 and are connected to the same network switch (100Mbps). The process will sometimes complete all the way through without error. More often than not, however, at some point during the process (seems random as to where), robocopy will fail with the error The specified network name is no longer available. It will wait 30 seconds and try the file again and eventually give up after a number of retries. Process will repeat at the next schedule interval and may complete... or not. When this occurs I am not able to access the share at all on the destination server from anywhere on the network for up to 30 minutes. There is nothing else on the network using this share. My question is what does this error mean specifically? Why is the share "dropping off" and becoming inaccessible? Is there a way to prevent it and get the file mirroring to be more stable?
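
    For what it's worth, a mirror job is usually hardened against transient drops with restartable mode and bounded retries – a sketch with placeholder paths (this works around, rather than explains, the share dropping off):

      robocopy \\origin\data \\receiver\mirror /MIR /Z /R:10 /W:30 /LOG+:C:\logs\mirror.log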

    Read the article

  • TFS 2010 : Unable to add Project to a collection

    - by Scott
    This morning I'm trying to set up Team Foundation Server 2010 to demo for my team. As this is just a demo, I thought I would install it on my Windows 7 machine, which also serves as my development machine. My development machine uses Visual Studio 2008 Team Suite. I installed Team Explorer 2008 and then reapplied SP1. Finally I installed and set up TFS 2010. TFS by default gave me administrator privileges. I started up Visual Studio and connected up to the Collection just fine. However, I'm unable to create a new project and get the following error message: "TF30172: You are trying to create a team project either without required permissions or with an older version of Team Explorer. Contact your project admin..." To check the permissions, I used my home computer, which is running Visual Studio 2010. On this machine I was able to connect up to the same TFS instance and create a project no problem. So it looks as though it is a Team Explorer problem, but everywhere on the web people are saying that not only is what I'm trying to do possible, they have done it themselves. What am I missing to add a project to TFS 2010 under Visual Studio 2008?

    Read the article

  • Outlook 2007 won't close

    - by Scott Weinstein
    I use Outlook 2007 at home as an IMAP client and RSS feed reader. I have a problem that when I close outlook, the window exits, but the process remains running. This prevents me from opening outlook again and on Win7 prevents rapid shutdown of my computer. How can I have Outlook 2007 exit for real? Edit: Here's what the addins dialog reports Active: None Inactive: MS Outlook Mobile Service, MS VBA for Outlook, OneNote Notes for Outlook Items, Outlook Change Notifier, Windows Search Email indexer.

    Read the article

  • ClearOS - how to create a site to site VPN between two ClearOS boxes?

    - by Scott Szretter
    I plan on setting up some ClearOS boxes at several sites, and would like to set up site-to-site VPNs between the remote sites and a main site (all running ClearOS Enterprise 5.2 SP1, the latest version). I have found references for how to set up ClearOS to VPN into devices such as Cisco for IPsec, and others for PPTP. But these references do not mention how you might configure two ClearOS boxes to talk to each other over IPsec or PPTP. I also saw documentation on installing OpenVPN and using the OpenVPN client software to VPN into the ClearOS box. I will probably use this for individual users to VPN in, but I have some small sites (1 to 10 users) that will have their own ClearOS box and need to create a site-to-site VPN link back to the main site's OpenVPN box. Is this possible? Can you point me to docs or other info – basically, how? A couple of updates: I did find a thread that asks the same basic question, where the user has a VPN set up between the two ClearOS machines (after installing the IPsec VPN modules), just not transporting traffic between the LANs – and the very last post claims you have to edit some files (/etc/ipsec.conf) and set the leftnexthop and rightnexthop values to %direct. After that, it's supposed to work. Could it be that simple? I also posted to ClearFoundation, and they pointed me to some documentation for setting up an unmanaged IPsec VPN. This looks pretty good, but I will most likely need to figure out how to handle a dynamic-DNS-type setup on at least one end. Also, what does it mean by multi-WAN? Finally, what exactly happens when a VPN connection goes down – does someone have to reboot the box, or…?
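
    To illustrate the kind of stanza that thread is describing, here is a purely hypothetical Openswan-style connection block with placeholder addresses – only the %direct nexthop values come from the forum post; nothing else here is ClearOS-specific:

      conn site-to-site
          left=203.0.113.10
          leftsubnet=192.168.1.0/24
          leftnexthop=%direct
          right=198.51.100.20
          rightsubnet=192.168.2.0/24
          rightnexthop=%direct
          authby=secret
          auto=start

    With authby=secret, the matching pre-shared key would live in /etc/ipsec.secrets on both boxes.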

    Read the article

  • Windows 7 Office 2007 GPO install

    - by Scott
    I have a GPO that works fine for installing Office 2007 Pro Plus on Vista and XP, but when it installs Office on Windows 7, somehow the Office key does not get entered via the customized .msp and needs to be entered manually. Has anyone else run into this? Any suggestions for a fix? It defeats the purpose of a remote unattended install if I then have to run around entering the stupid key. Edit: I am sorry, I should have specified that I also have the config.xml file customized already. I have it set to display level none, completion notice no, suppress modal yes, accept EULA yes, the key put in, and the company name and the username variable (%USERNAME%).
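
    For reference, those settings normally live in config.xml along these lines – the product attribute, key and company name below are placeholders rather than the poster's values:

      <Configuration Product="ProPlus">
        <Display Level="none" CompletionNotice="no" SuppressModal="yes" AcceptEula="yes" />
        <PIDKEY Value="AAAAABBBBBCCCCCDDDDDEEEEE" />
        <COMPANYNAME Value="Contoso" />
        <USERNAME Value="%USERNAME%" />
      </Configuration>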

    Read the article

  • Imagemagick PDF to JPG conversion failing

    - by Scott
    I'm trying to convert the first page of a PDF to a JPG. I'm pretty sure I got this to work with certain PDFs, but is it really possible that certain PDFs are made incorrectly and cannot be converted? I tried running this first: $ convert 10-03-26.pdf[1] test.jpg And I got the follow: Error: /syntaxerror in readxref Operand stack: Execution stack: %interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push 1 3 %oparray_pop 1 3 %oparray_pop --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval-- Dictionary stack: --dict:1062/1417(ro)(G)-- --dict:0/20(G)-- --dict:73/200(L)-- --dict:73/200(L)-- --dict:97/127(ro)(G)-- --dict:229/230(ro)(G)-- --dict:14/15(L)-- Current allocation mode is local ESP Ghostscript 7.07.1: Unrecoverable error, exit code 1 convert: Postscript delegate failed `10-03-26.pdf'. Running this instead: $ convert -verbose -colorspace rgb '10-03-26.pdf[1]' test.jpg I get the following: Error: /syntaxerror in readxref Operand stack: Execution stack: %interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push 1 3 %oparray_pop 1 3 %oparray_pop --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval-- Dictionary stack: --dict:1062/1417(ro)(G)-- --dict:0/20(G)-- --dict:73/200(L)-- --dict:73/200(L)-- --dict:97/127(ro)(G)-- --dict:229/230(ro)(G)-- --dict:14/15(L)-- Current allocation mode is local ESP Ghostscript 7.07.1: Unrecoverable error, exit code 1 "gs" -q -dBATCH -dSAFER -dMaxBitmap=500000000 -dNOPAUSE -dAlignToPixels=0 "-sDEVICE=pnmraw" -dTextAlphaBits=4 -dGraphicsAlphaBits=4 "-g792x1611" "-r72x72" -dFirstPage=2 -dLastPage=2 "-sOutputFile=/tmp/magick-XXU3T44P" "-f/tmp/magick-XXoMKL8Z" "-f/tmp/magic2eec1F"Start of Image Define Huffman Table 0x00 0 1 5 1 1 1 1 1 1 0 0 0 0 0 0 0 Define Huffman Table 0x01 0 3 1 1 1 1 1 1 1 1 1 0 0 0 0 0 Define Huffman Table 0x10 0 2 1 3 3 2 4 3 5 5 4 4 0 0 1 125 Define Huffman Table 0x11 0 2 1 2 4 4 3 4 7 5 4 4 0 1 2 119 End Of Image convert: Postscript delegate failed `10-03-26.pdf'. Why would the conversion fail? Just as an aside, this is happening on a (gs) Grid-Service on (mt) Media Temple hosting. I cannot install programs on the server, but both Imagemagick and Ghostscript are installed Thanks!
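
    One incidental note, separate from the Ghostscript error itself: ImageMagick page indexes are zero-based, so the [1] above selects the second page; grabbing the first page would look like this (same example file name):

      convert '10-03-26.pdf[0]' -colorspace rgb test.jpg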

    Read the article

  • Windows Explorer is blank

    - by Scott Mitchell
    I am using Windows 7 Ultimate x64. Once a week or so, when I boot up in the morning and launch Windows Explorer, it shows up blank, as the following screenshot shows. Clicking on My Computer doesn't load anything. Interestingly, I can go to the Address bar at the top and type in a folder name. This brings up that folder's files and subfolders, but as I drill around, the tree of folders on the left only shows the immediate folder and not its siblings. There's no plus icon to expand the folder, etc. My usual "solution" is to reboot, which typically brings everything back to normal, but this is a frustrating remedy. Any idea what's going on and how to fix it? Some Googling turned up this discussion, but the remedy was to uninstall a particular piece of software that I don't have installed (Virtual Clone Drive). Thanks
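
    As an aside, when Explorer does get into this state, a lighter workaround than a full reboot might be to restart the shell process from a command prompt – this is a general-purpose step, not a diagnosis of the underlying cause:

      taskkill /f /im explorer.exe
      start explorer.exe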

    Read the article

  • Is there a Unix/Linux platform equivalent to Telligent Community?

    - by Scott A. Lawrence
    Telligent Community combines blogs, wikis, forums, and file-sharing capabilities into a single product with single sign-on, using all Microsoft technologies. Is there an equivalent offering that runs on Unix/Linux? Or would I have to pick and choose individual product offerings and figure out another option for single sign-on across them? Are there plug-ins for something like WordPress or MovableType that might add the necessary functionality? A friend of mine is looking to add a "members-only" area to her company's website, and since they're hosted on Dreamhost (and can't afford StackExchange pricing yet), I'm trying to find other options for them.

    Read the article

  • Enable RemoteApp Full Desktop programmatically

    - by Scott Chamberlain
    I am writing a PowerShell script to set up some Hyper-V VMs; however, there is one step I am having trouble automating. How do I programmatically check the box to allow Remote Desktop access in the RemoteApp settings? I can set up all of the customizations I need by doing #build the security descriptor so the desktop only shows up for people who should be allowed to see it $remoteDesktopUsersSid = New-Object System.Security.Principal.SecurityIdentifier($remoteDesktopUsersGroup.objectSid[0],0) $aceTemplet = 'O:WDG:WDD:ARP(A;CIOI;CCLCSWLORCGR;;;{0})' $securityDescriptor = $aceTemplet -f $remoteDesktopUsersSid #get a copy of the WMI instance $tsRemoteDesktop = Get-WmiObject -Namespace root\CIMV2\TerminalServices -Class Win32_TSRemoteDesktop #set settings $tsRemoteDesktop.Name = $ServerDisplayName $tsRemoteDesktop.SecurityDescriptor = $securityDescriptor $tsRemoteDesktop.IconPath = $IconPath $tsRemoteDesktop.IconIndex = $IconIndex #push settings back to server Set-WmiInstance -InputObject $tsRemoteDesktop -PutType UpdateOnly however the instance of that WMI object does not exist until after you have the above box checked. I attempted to use Set-WmiInstance to instantiate and set the settings at the same time, but I keep getting errors like: Set-WmiInstance : At line:53 char:16 + Set-WmiInstance <<<< -Namespace root\CIMV2\TerminalServices -Class Win32_TSRemoteDesktop -Arguments @{Alias='TSRemoteDesktop';Name=$ServerDisplayName;ShowInPortal=$true;SecurityDescriptor=$securityDescriptor} + CategoryInfo : NotSpecified: (:) [Set-WmiInstance], ArgumentException + FullyQualifiedErrorId : System.ArgumentException,Microsoft.PowerShell.Commands.SetWmiInstance (also, after running the command and getting the error, it will delete the instance of Win32_TSRemoteDesktop if it already existed and uncheck the box in the properties settings) Is there any way to programmatically check that box, or can anyone help with why Set-WmiInstance throws that error?

    Read the article

  • How to find and diagnose poorly-performing Firefox addins?

    - by Scott Bilas
    I use a lot of addins in my Firefox. One of them is causing periodic freezes when watching video. Super annoying. Aside from doing a binary search of disabling addins to narrow down the problem (which would take a very long time due to frequency of the freezes), are there any other options? If the problem came from the native app, then I'd just load up a profiler and see where the time is going. But it's all in Javascript. Are there any tools that exist to help figure this out? Maybe some instrumentation I can throw on a few key source files in a local build to help diagnose the problem?

    Read the article

< Previous Page | 20 21 22 23 24 25 26 27 28 29 30 31  | Next Page >