Search Results

Search found 16899 results on 676 pages for 'local'.


  • How to Reuse Your Old Wi-Fi Router as a Network Switch

    - by Jason Fitzpatrick
    Just because your old Wi-Fi router has been replaced by a newer model doesn’t mean it needs to gather dust in the closet. Read on as we show you how to take an old and underpowered Wi-Fi router and turn it into a respectable network switch (saving you $20 in the process). Image by mmgallan.

    Why Do I Want To Do This? Wi-Fi technology has changed significantly in the last ten years, but Ethernet-based networking has changed very little. As such, a Wi-Fi router with 2006-era guts lags significantly behind current Wi-Fi router technology, but the Ethernet networking component of the device is just as useful as ever; aside from potentially being only 100Mbps capable instead of 1000Mbps (which for 99% of home applications is irrelevant), Ethernet is Ethernet. What does this matter to you, the consumer? It means that even though your old router no longer cuts it for your Wi-Fi needs, the device is still a perfectly serviceable (and high quality) network switch. When do you need a network switch? Any time you want to share an Ethernet cable among multiple devices. For example, let’s say you have a single Ethernet wall jack behind your entertainment center, but four devices that you want to link to your local network via hardline: your smart HDTV, DVR, Xbox, and a little Raspberry Pi running XBMC. Instead of spending $20-30 to purchase a brand new switch of comparable build quality to your old Wi-Fi router, it makes financial sense (and is environmentally friendly) to invest five minutes of your time tweaking the settings on the old router to turn it from a Wi-Fi access point and routing tool into a network switch–perfect for dropping behind your entertainment center so that your DVR, Xbox, and media center computer can all share an Ethernet connection.

    What Do I Need? For this tutorial you’ll need a few things, all of which you likely have readily on hand or are free for download. To follow the basic portion of the tutorial, you’ll need the following: 1 Wi-Fi router with Ethernet ports, 1 computer with an Ethernet jack, and 1 Ethernet cable. For the advanced tutorial you’ll need all of those things, plus 1 copy of the DD-WRT firmware for your Wi-Fi router. We’re conducting the experiment with a Linksys WRT54GL Wi-Fi router. The WRT54 series is one of the best-selling Wi-Fi router series of all time and there’s a good chance a significant number of readers have one (or more) of them stuffed in an office closet. Even if you don’t have one of the WRT54 series routers, however, the principles we’re outlining here apply to all Wi-Fi routers; as long as your router administration panel allows the necessary changes you can follow right along with us.

    A quick note on the difference between the basic and advanced versions of this tutorial before we proceed. Your typical Wi-Fi router has 5 Ethernet ports on the back: 1 labeled “Internet”, “WAN”, or a variation thereof, intended to be connected to your DSL/cable modem, and 4 labeled 1-4, intended to connect Ethernet devices like computers, printers, and game consoles directly to the Wi-Fi router. When you convert a Wi-Fi router to a switch, in most situations you’ll lose two ports: the “Internet” port cannot be used as a normal switch port, and one of the switch ports becomes the input port for the Ethernet cable linking the switch to the main network. This means, referencing the diagram above, you’d lose the WAN port and LAN port 1, but retain LAN ports 2, 3, and 4 for use. If you only need the switch for 2-3 devices this may be satisfactory. However, for those of you who would prefer a more traditional switch setup, where the uplink has its own dedicated port and all of the numbered ports remain accessible, you’ll need to flash a third-party router firmware like the powerful DD-WRT onto your device. Doing so opens up the router to a greater degree of modification and allows you to assign the previously reserved WAN port to the switch, thus opening up LAN ports 1-4. Even if you don’t intend to use that extra port, DD-WRT offers you so many more options that it’s worth the extra few steps.

    Preparing Your Router for Life as a Switch: Before we jump right in to shutting down the Wi-Fi functionality and repurposing your device as a network switch, there are a few important prep steps to attend to. First, you want to reset the router (if you just flashed a new firmware to your router, skip this step). Follow the reset procedure for your particular router, or go with what is known as the “Peacock Method”, wherein you hold down the reset button for thirty seconds, unplug the router and wait (while still holding the reset button) for thirty seconds, and then plug it in while, again, continuing to hold down the reset button. Over the life of a router a variety of changes are made, big and small, so it’s best to wipe them all back to the factory defaults before repurposing the router as a switch. Second, after resetting, we need to change the IP address of the device on the local network to an address which does not directly conflict with the new router. The typical default IP address for a home router is 192.168.1.1; if you ever need to get back into the administration panel of the router-turned-switch to check on things or make changes, it will be a real hassle if the IP address of the device conflicts with that of the new home router. The simplest way to deal with this is to assign an address close to the actual router address but outside the range of addresses that your router will hand out via DHCP; a good pick then is 192.168.1.2. Once the router is reset (or re-flashed) and has been assigned a new IP address, it’s time to configure it as a switch.

    Basic Router to Switch Configuration: If you don’t want to (or need to) flash new firmware onto your device to open up that extra port, this is the section of the tutorial for you: we’ll cover how to take a stock router, our previously mentioned WRT54 series Linksys, and convert it to a switch. Hook the Wi-Fi router up to the network via one of the LAN ports (consider the WAN port as good as dead from this point forward; unless you start using the router in its traditional function again or later flash a more advanced firmware to the device, the port is officially retired at this point). Open the administration control panel via a web browser on a connected computer. Before we get started, two things: first, anything we don’t explicitly instruct you to change should be left in the default factory-reset setting as you find it, and second, change the settings in the order we list them, as some settings can’t be changed after certain features are disabled. To start, let’s navigate to Setup -> Basic Setup. Here you need to change the following things:

    Local IP Address: [different than the primary router, e.g. 192.168.1.2]
    Subnet Mask: [same as the primary router, e.g. 255.255.255.0]
    DHCP Server: Disable

    Save with the “Save Settings” button and then navigate to Setup -> Advanced Routing:

    Operating Mode: Router

    This particular setting is very counterintuitive. The “Operating Mode” toggle tells the device whether or not it should enable the Network Address Translation (NAT) feature. Because we’re turning a smart piece of networking hardware into a relatively dumb one, we don’t need this feature, so we switch from Gateway mode (NAT on) to Router mode (NAT off). Our next stop is Wireless -> Basic Wireless Settings:

    Wireless SSID Broadcast: Disable
    Wireless Network Mode: Disabled

    After disabling the wireless we’re going to, again, do something counterintuitive. Navigate to Wireless -> Wireless Security and set the following parameters:

    Security Mode: WPA2 Personal
    WPA Algorithms: TKIP+AES
    WPA Shared Key: [select some random string of letters, numbers, and symbols like JF#d$di!Hdgio890]

    Now you may be asking yourself: why on Earth are we setting a rather secure Wi-Fi configuration on a Wi-Fi router we’re not going to use as a Wi-Fi node? On the off chance that something strange happens after, say, a power outage, when your router-turned-switch cycles on and off a bunch of times and the Wi-Fi functionality is activated, we don’t want to be running the Wi-Fi node wide open and granting unfettered access to your network. While the chances of this are next to nonexistent, it takes only a few seconds to apply the security measure, so there’s little reason not to. Save your changes and navigate to Security -> Firewall:

    Uncheck everything but Filter Multicast
    Firewall Protect: Disable

    At this point you can save your changes again, review the changes you’ve made to ensure they all stuck, and then deploy your “new” switch wherever it is needed.

    Advanced Router to Switch Configuration: For the advanced configuration, you’ll need a copy of DD-WRT installed on your router. Although doing so is an extra few steps, it gives you a lot more control over the process and liberates an extra port on the device. Hook the Wi-Fi router up to the network via one of the LAN ports (later you can switch the cable to the WAN port). Open the administration control panel via a web browser on the connected computer. Navigate to the Setup -> Basic Setup tab to get started. In the Basic Setup tab, ensure the following settings are adjusted. These setting changes are not optional and are required to turn the Wi-Fi router into a switch:

    WAN Connection Type: Disabled
    Local IP Address: [different than the primary router, e.g. 192.168.1.2]
    Subnet Mask: [same as the primary router, e.g. 255.255.255.0]
    DHCP Server: Disable

    In addition to disabling the DHCP server, also uncheck all the DNSMasq boxes at the bottom of the DHCP sub-menu. If you want to activate the extra port (and why wouldn’t you?), in the WAN port section:

    Assign WAN Port to Switch [X]

    At this point the router has become a switch and, since you have access to the WAN port, the LAN ports are all free. Since we’re already in the control panel, however, we might as well flip a few optional toggles that further lock down the switch and prevent something odd from happening. The optional settings below are arranged by the menu in which you’ll find them. Remember to save your settings with the save button before moving on to a new tab. While still in the Setup -> Basic Setup menu, change the following:

    Gateway/Local DNS: [IP address of primary router, e.g. 192.168.1.1]
    NTP Client: Disable

    The next step is to turn off the radio completely (which not only kills the Wi-Fi but actually powers the physical radio chip off). Navigate to Wireless -> Advanced Settings -> Radio Time Restrictions:

    Radio Scheduling: Enable
    Select “Always Off”

    There’s no need to create a potential security problem by leaving the Wi-Fi radio on; the above toggle turns it completely off. Under Services -> Services:

    DNSMasq: Disable
    ttraff Daemon: Disable

    Under the Security -> Firewall tab, uncheck every box except “Filter Multicast”, as seen in the screenshot above, and then disable SPI Firewall. Once you’re done here, save and move on to the Administration tab. Under Administration -> Management:

    Info Site Password Protection: Enable
    Info Site MAC Masking: Disable
    CRON: Disable
    802.1x: Disable
    Routing: Disable

    After this final round of tweaks, save and then apply your settings. Your router has now been, strategically, dumbed down enough to plod along as a very dependable little switch. Time to stuff it behind your desk or entertainment center and streamline your cabling.
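
    (Beyond the web interface: if your DD-WRT build has telnet or SSH enabled, the same basic changes can be scripted from a shell. This is a hedged sketch, not part of the original tutorial; nvram variable names differ between builds, so treat every name below as an assumption and confirm it with nvram show before committing.)

        # hedged sketch for DD-WRT builds with shell access; variable names are
        # assumptions -- verify each one with `nvram show | grep <name>` first
        nvram set lan_ipaddr=192.168.1.2     # move the admin panel off the new router's IP
        nvram set lan_netmask=255.255.255.0  # match the primary router's subnet
        nvram set lan_proto=static           # stop acting as a DHCP server
        nvram set wl0_radio=0                # power the Wi-Fi radio down
        nvram commit                         # persist the changes across reboots
        reboot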


  • BFCM – Big Fat Check Mark

    - by onefloridacoder
    I was installing TFS on my local laptop last week and had just got around to setting up my initial collection using the TFS console tool when - bang! - I received a message telling me that my local database didn’t have the full-text search option installed. I remember the option in a (long) list of options and didn’t remember fiddling with it. Whatever the reason, if you are installing TFS Basic on your box, make sure you have that little check ticked, or you won’t get the big fat one pictured above. I had installed SQL 2008 Developer Edition, which has worked well for what I’ve needed so far, and I just needed to run the “Add Feature” option instead of the “Repair” option. HTH


  • Problem with executing dssp (secondary structure assignment)

    - by Mana
    I followed a previous post to install dssp on Ubuntu: How to install dssp (secondary structure assignments) under 12.04? After the installation I tried to execute dssp, which was in /usr/local/bin/dssp, but it gave me the following error:

    bash: /usr/local/bin/dssp: cannot execute binary file

    I also tried to analyse some trajectory files from a simulation using the command

    do_dssp -s md.tpr -f traj.xtc

    but it also failed, giving me the error below:

    Reading file md.tpr, VERSION 4.5.5 (single precision)
    Reading file md.tpr, VERSION 4.5.5 (single precision)
    Segmentation fault (core dumped)

    Please post me a solution for this problem. Thank you!
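
    (A generic first diagnostic, not from the linked post: "cannot execute binary file" usually means the executable does not match the machine's architecture - for example a 32-bit dssp binary on a 64-bit Ubuntu without 32-bit libraries. The commands below check for that; the 32-bit compatibility package name is an assumption that varies by release.)

        # compare the binary's architecture with the machine's
        file /usr/local/bin/dssp   # e.g. "ELF 32-bit LSB executable, Intel 80386"
        uname -m                   # e.g. "x86_64"
        # if they disagree, install 32-bit support (package name varies by
        # release; on 12.04 the catch-all package was ia32-libs)
        sudo apt-get install ia32-libs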


  • rsnapshot intervals in configuration file…

    - by Patrick
    A simple question about rsnapshot. In order to perform daily backups I'm going to add lines to cron in my Ubuntu. Then why do I also have these lines in rsnapshot.conf?

    #########################################
    # BACKUP INTERVALS                      #
    # Must be unique and in ascending order #
    # i.e. hourly, daily, weekly, etc.      #
    #########################################
    interval hourly 6
    interval daily 7
    interval weekly 4
    #interval monthly 3

    If I use cron, should I disable them? Thanks.

    PS: I've just realized that in the crontab I still have "hourly" and "daily". Should I then uncomment only the ones I use in the crontab? And what's the point of specifying hourly if it is already specified in cron? I'm a bit confused.

    # crontab -e
    0 */4 * * * /usr/local/bin/rsnapshot hourly
    30 23 * * * /usr/local/bin/rsnapshot daily
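
    (For context, a sketch of how the two files divide the work, based on rsnapshot's usual setup: cron decides when each backup level runs, while the interval lines decide how many snapshots of that level rsnapshot keeps before rotating the oldest away. You therefore keep both, and only delete an interval line if you also drop its cron entry.)

        # cron controls *when* a level runs; rsnapshot.conf controls *how many*
        # snapshots of that level are kept (interval hourly 6 = keep six).
        # after editing either file, a quick sanity check of the config:
        rsnapshot configtest
        # and a dry run of one level to see exactly what it would execute:
        rsnapshot -t hourly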


  • Is there a significant hit to a non .com TLDs exact match domain (EMD) names after Google's Panda update?

    - by ElHaix
    In this article, there is a good overview of exact-match domain names and how they affect SEO after Google's Panda update. The last graph shows the non-.com EMD influence, where it is suggested that a .com TLD will perform better than a non-.com one. However, let's consider local search. In the US, .coms work great. But say you're in Canada and you have a .ca EMD, all with local, Canadian results. Would the expectation be that the .com equivalent still performs better? As a user I would expect the .ca results to be more relevant, and I'm wondering if anyone else has experience with this.


  • PHP+Apache as forward/reverse proxy: how to process client requests and server responses in PHP?

    - by Lightworker
    Hi! I'm having a lot of trouble with the proper configuration of Apache mod_proxy.so to work as desired... The main idea is to create a proxy on a local machine in a network which will have the ability to process a client request (from a client connected through this Apache proxy) in PHP, and which will also have the capacity to process the server responses in PHP. Those are the two functionalities, and they are independent of one another. Let me present a little schema of what I need to achieve: As you can see here, there are two paths: a blue one and a red one. For the blue one, I basically connected a client (Machine B - cell phone) on my local network (home) and configured it to go through a proxy, which is Machine A (personal computer) on exactly the same network. So let's say (not DHCP):

    Machine A: 192.168.1.40 -- Apache is running on this machine, and configured to listen on port 80.
    Machine B (cell phone): 192.168.1.75 -- configured to go through a proxy, which is IP 192.168.1.40 and port 80 (basically, Machine A).

    After configuring Apache properly, which is basically to remove the "#" from httpd.conf on the lines for mod_proxy.so (main worker), mod_proxy_connect.so (SSL, AllowCONNECT, ...) and mod_proxy_http.so (needed to handle HTTP requests/responses), I have, in my case, lines like this:

    # Implements a proxy/gateway for Apache.
    Include "conf/extra/httpd-proxy.conf"
    # Various default settings
    Include "conf/extra/httpd-default.conf"
    # Secure (SSL/TLS) connections
    Include "conf/extra/httpd-ssl.conf"

    which gives me the ability to configure the file httpd-proxy.conf to prepare the forward proxy or the reverse proxy. So I'm not sure if what I need is a forward proxy or a reverse one. For a forward proxy I've done this:

    <IfModule proxy_module>
    <IfModule proxy_http_module>
    #
    # FORWARD Proxy
    #
    #ProxyRequests Off
    ProxyRequests On
    ProxyVia On
    <Proxy *>
    Order deny,allow
    # Allow from all
    Deny from all
    Allow from 192.168.1
    </Proxy>
    </IfModule>
    </IfModule>

    which basically passes all the packets normally to the server and back to the client. I can trace it perfectly (and test that it works) looking at the "access.log" from Apache. Any request I make with the cell phone then appears in the Apache log. So it works. But here comes the problem: I need to process those client requests. And I need to do it in PHP. I have read a lot about this. I've read in detail the official Apache site about mod_proxy, and I've searched a lot on forums, but without luck. So I thought about a first approximation:

    1) A forward proxy in Apache passes all the packets, and it's not possible to process them. This seems to be true, so what about a reverse proxy? I envisioned something like:

    ProxyRequests Off
    <Proxy *>
    Order deny,allow
    Allow from all
    </Proxy>
    ProxyPass http://www.google.com http://www.yahoo.com
    ProxyPassReverse http://www.google.com http://www.yahoo.com

    which is just a test, but this should mean that when trying to navigate to Google on my cell phone, I should end up at Yahoo, shouldn't it? But no. It doesn't work. And you really see that ALL the examples for an Apache reverse proxy go like:

    ProxyPass /foo http://foo.example.com/bar
    ProxyPassReverse /foo http://foo.example.com/bar

    which means that any kind of request in a local context will be resolved at a remote location. But what I need is the inverse! It's that when asking for a remote site on my phone, I resolve this request on my local server (the Apache one) to process it with a PHP module. So, if it's a forward proxy, I need to pass through PHP first. If it's a reverse proxy, I need to change the "going" direction to my local server to process it first in PHP. Then a second option comes to mind:

    2) I've seen something like:

    <Proxy http://example.com/foo/*>
    SetOutputFilter INCLUDES
    </Proxy>

    And I started to search for SetOutputFilter, SetInputFilter, AddOutputFilter and AddInputFilter. But I don't really know how I can use them. They seem to be good, or a solution for me, because with something like this I should be able to add an input filter to process the client requests in PHP and send back to the client whatever I program/want (not the remote server response), which is the BLUE path on the schema; and I should have the ability to add an output filter, which seems to give me the ability to process the remote server response before sending it to the client, which should be the RED path on the schema. The red path is just to read server responses and play with them, nothing more. The blue path is the important one, because I will send to the client whatever I want after processing the requests. I'm so sorry for this amazingly big post, but I needed to explain it as well as I can. I hope someone will understand my problem and help me to solve it! Lots of thanks in advance!! :)
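
    (A direction worth exploring, sketched under assumptions rather than taken from the question: mod_proxy alone never hands the request to PHP, so the usual trick is to keep the phone pointed at Apache and let mod_rewrite route every incoming request to a PHP script, which rebuilds the target URL and fetches it itself, e.g. with cURL - giving you both the request and the response inside PHP. Module and file names below are illustrative.)

        # enable the module this approach leans on (Debian helper shown; with a
        # conf/extra layout, uncomment the matching LoadModule line instead)
        a2enmod rewrite
        # then, in the vhost the phone's proxy setting points at, something like:
        #   RewriteEngine On
        #   RewriteCond %{REQUEST_URI} !^/proxy\.php
        #   RewriteRule ^ /proxy.php [L]
        # where the hypothetical proxy.php rebuilds the target from the request
        # line/Host header and fetches it with cURL before answering the client
        apachectl -k restart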


  • Referencing environment variables *in* /etc/environment?

    - by Stefan Kendall
    I recently discovered /etc/environment, which seems a more standard way to set up simple environment variables than scripts, but I was wondering if there is a way to back-reference environment variables in the /etc/environment file. That is, I have this:

    JAVA_HOME="/tools/java"
    GRAILS_HOME="/tools/grails"
    GROOVY_HOME="/tools/groovy"
    GRADLE_HOME="/tools/gradle"
    PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"

    If I try to add $JAVA_HOME/bin to the PATH definition, however, I get a literal $JAVA_HOME/bin, and not the interpolated variable. To remedy this, I'm creating environment.sh in profile.d to add the /bin entries to the path, but this seems sloppy and disorganized. Is there a way to back-reference the environment variables in /etc/environment?
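
    (The profile.d workaround the question already mentions, sketched for concreteness: /etc/environment is read by PAM, not by a shell, so it never expands $VARS; a small script in /etc/profile.d is the conventional place to build PATH from other variables. The file name is illustrative.)

        # /etc/profile.d/dev-tools.sh -- sourced by login shells, so $VARS expand
        export JAVA_HOME=/tools/java
        export GRAILS_HOME=/tools/grails
        export GROOVY_HOME=/tools/groovy
        export GRADLE_HOME=/tools/gradle
        export PATH="$PATH:$JAVA_HOME/bin:$GRAILS_HOME/bin:$GROOVY_HOME/bin:$GRADLE_HOME/bin"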


  • Autoscaling in a modern world… Part 3

    - by Steve Loethen
    The Wasabi Hands-on Labs give you a good look at the basic mechanics, but I don’t find the setup too practical. Using a local console application to host the Autoscaler and rules files is probably the (IMHO) least likely architecture. Far more common would be hosting in a service on premise (if you want to have the Autoscaler local) or, most likely, hosting it in an Azure role of its own. I chose to go the Azure route. The first step was to get the rules.xml and the services.xml files into the cloud. I tend to be a “one step at a time” sort of guy, so running the console application with the rules sitting in an Azure-hosted set of blobs seemed to be the logical first step. Here are the steps:

    1) Create a container in the storage account you wish to use. The name does not matter; you will get a chance to set the container name (as well as the file names) in the app.config.

    2) Copy the two files from where you created them to your container. I used the same files I had locally. I made the container public to eliminate security issues, but in the final application a bit of security needs to be applied (one problem at a time). The content type was set to text/xml. I found one reference claiming the importance of this step, and it makes sense.

    3) Adjust the app.config to set the location of the files. This will let you set all the storage account and key information needed to reach into the cloud from your console application. The sections of your app.config will look like this:

    <rulesStores>
      <add name="Blob Rules Store"
           type="Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.Rules.Configuration.BlobXmlFileRulesStore, Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling, Version=5.0.1118.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
           blobContainerName="[ContainerName]"
           blobName="rules.xml"
           storageAccount="DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]"
           monitoringRate="00:00:30"
           certificateThumbprint=""
           certificateStoreLocation="LocalMachine"
           checkCertificateValidity="false" />
    </rulesStores>

    <serviceInformationStores>
      <add name="Blob Service Information Store"
           type="Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.ServiceModel.Configuration.BlobXmlFileServiceInformationStore, Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling, Version=5.0.1118.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
           blobContainerName="[ContainerName]"
           blobName="services.xml"
           storageAccount="DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]"
           monitoringRate="00:00:30"
           certificateThumbprint=""
           certificateStoreLocation="LocalMachine"
           checkCertificateValidity="false" />
    </serviceInformationStores>

    Once I had the files up in the sky, I renamed the local copies, just to make myself feel better about the application using the correct set of rules and services. Deploy the web role to the cloud. Once it is up and running, start the console application. You should find the application scales up and down in response to the buttons on the web site. Tune in next time for moving the hosting of the Autoscaler to a worker role, discussions on getting the logging information into diagnostics into storage, and a set of discussions about certs and how they play a role.


  • Working with Visual Studio Web Development Server and IE6 in XP Mode on Windows 7

    - by Igor Milovanovic
    (Brian Reiter from thoughtful computing has described this setup in this StackOverflow thread. The credit for the idea is entirely his; I have just extended it with some step-by-step descriptions and added some links and screenshots.)

    If you are forced to still support Internet Explorer 6, you can set up the following combination on your machine to make development for it less painful. A common problem if you are developing on Windows 7 is that you can’t install IE6 on your machine (not that you would want to anyway). So you will probably end up working locally with IE8 and FF, and testing your IE6 compatibility on a separate machine. This can get quite annoying, because you will have to maintain two different development environments, not have all the tools available, etc. You can help yourself by installing IE6 in Windows 7 XP Mode, which is basically just a Windows XP running in a virtual machine.

    [1] Windows XP Mode installation

    After you have installed and configured your XP Mode (remember the security settings like Windows Update and antivirus software), you can add the shortcut to IE6 in the virtual machine to the “all users” start menu. This shortcut will be replicated to your Windows 7 XP Mode start menu, and you will be able to seamlessly start your IE6 as a normal window on your Windows 7 desktop.

    [2] Configure IE6 for the Windows 7 installation

    If you configure your XP Mode to use (Shared Networking) NAT, you can now use IE6 to browse sites on the internet (add proxy settings to IE6 if necessary). The problem now is that you can’t connect to the webdev server which is running on your local machine. This is because the web development server is crippled to allow only local connections for security reasons. In order to trick webdev into believing that the requests are coming from the local machine itself, you can use a lightweight proxy like Privoxy on your host (Windows 7) machine and configure the IE6 running in the virtual host accordingly. The first step is to make the host machine (running Windows 7) reachable from the virtual machine (running XP). In order to do that, you can install the loopback adapter and configure it to use an IP which is routable from the virtual machine; in the example screenshot, 192.168.1.66.

    [3] How to install loopback adapter in Windows 7

    After installation you can assign a static IP which is routable from the virtual machine (in the example, 192.168.1.66). The next step is to configure Privoxy to listen on that IP address (using some unused port; in the example, the default port 8118). Change the following lines in config.txt:

    #      Suppose you are running Privoxy on an IPv6-capable machine and
    #      you want it to listen on the IPv6 address of the loopback device:
    #
    #        listen-address [::1]:8118
    #
    listen-address  192.168.1.66:8118

    The last step is to configure IE6 to use the Privoxy instance running on your Windows 7 host machine as a proxy for all addresses (including localhost). And now you can use your Windows 7 XP Mode IE6 to connect to your Visual Studio webdev web server.

    [4] http://stackoverflow.com/questions/683151/connect-remotely-to-webdev-webserver-exe


  • How Microsoft listens

    - by Stacy Vicknair
    This being my freshman year as an MVP, I had a realization that, I’m perhaps embarrassed to say, hasn’t happened sooner. The realization comes much like the iconic M&Ms commercial where the M&Ms run into Santa and exclaim, “He does exist!” My personal realization arguably has a greater implication: Microsoft does listen. This is the most important lesson that I received this year attending the MVP Summit. My hope is that I can convince you that we are empowered to make a difference. Instead of using “Man, I hate how this works / doesn’t work!” as cooler conversation, we can use it as true interaction with Microsoft. We as customers of Microsoft need to stop asking the question “Will this work for me?” and instead ask “How can this work for me?” There are three quick resources that the average developer has access to today that they can use to be heard by the product teams, and by no means should you think twice if you have a concern that you’d like a real response on.

    MVPs
    MVPs are members of your community who have a deep relationship with Microsoft and will have connections to their associated product group. Don’t think of them as just a resource for answers, but also as your ambassadors for getting your experiences heard. You can find your local MVPs by browsing the directory at: https://mvp.support.microsoft.com/communities/mvp.aspx

    Evangelists
    Evangelists are employees of Microsoft who work to foster and grow communities in their assigned region. They are first-class citizens of Microsoft and are often deeply involved with the product groups. As a result, they will be more than glad to direct your questions or concerns to those who can answer them most expertly. With that said, evangelists are also very busy people (who do amazing things for the community) and might not be able to get you that conversation as quickly as a local MVP. You can find your local evangelist at the following website: http://msdn.microsoft.com/en-us/bb905078.aspx

    Microsoft Connect
    This is one of the resources that I haven’t used enough, but it cannot be understated. Connect is the starting point of the social conversation that happens between Microsoft and the community daily. Connect acts as a portal where you can provide new feedback as well as comment on and rate the feedback provided by others. Power is in numbers when it comes to Connect, so the exposure that your feedback can get not only lets you know that you aren’t the only one who wants change, but also lets Microsoft know the same. https://connect.microsoft.com


  • Do You Develop Your PL/SQL Directly in the Database?

    - by thatjeffsmith
    I know this sounds like a REALLY weird question for many of you. Let me make one thing clear right away, though: I am NOT talking about creating and replacing PL/SQL objects directly in a production environment. Do we really need to talk about developers in production again? No, what I am talking about is a developer doing their work from start to finish in a development database. These are generally available to a development team for building the next and greatest version of your databases and database applications. And of course you are using a third-party source control system, right? Last week I was in Tampa, FL presenting at the monthly Suncoast Oracle User’s Group meeting. Had a wonderful time, great questions and back-and-forth. My favorite heckler was there, @oraclenered, AKA Chet Justice. I was in the middle of talking about how it’s better to do your PL/SQL work in the Procedure Editor when Chet pipes up:

    Don’t do it that way, that’s wrong. Just press play to edit the PLSQL directly in the database.

    Or something along those lines. I didn’t get what the heck he was talking about. I had been showing how the Procedure Editor gives you much better feedback and support when working with PL/SQL. After a few back-and-forths I got to what Chet’s main objection was; again, I’m going to paraphrase:

    You should develop offline in your SQL worksheet. Don’t do anything in the database until it’s done.

    I didn’t understand. Were developers expected to be able to internalize and mentally model the PL/SQL engine, see where their errors were, etc. in these offline scripts? No, please give Chet more credit than that.

    What is the ideal Oracle Development Environment? If I were back in the ‘real world’ of database development, I would do all of my development outside of the ‘dev’ instance. My development process looks a little something like this:

    1. Do I have a program that already does something like this - copy and paste
    2. Has some smart person already written something like this - copy and paste
    3. Start typing in the white-screen-of-panic and bungle along until I get something that half-works
    4. Tweak, debug, test until I have fooled my subconscious into thinking that it’s ‘good’

    As you might understand, I don’t want my co-workers to see the evolution of my code. It would seriously freak them out and I probably wouldn’t have a job anymore (don’t remind me that I already worked myself out of development). So here’s what I like to do:

    Run a Local Instance of Oracle on my Machine and Develop My Code Privately. I take a copy of development - that’s what source control is for after all - and run it where no one else can see it. I now get to be my own DBA. If I need a trace - no problem. If I want to run an ASH report, no worries. If I need to create a directory or run some DataPump jobs, that’s all on me. Now when I get my code ‘up to snuff,’ then I will check it into source control and compile it into the official development instance. So my teammates suddenly go from seeing no program to a mostly complete program. Is this right? If not, it doesn’t seem wrong to me. And after talking to Chet in the car on the way to the local cigar bar, it seems that he’s of the same opinion. So what’s so wrong with coding directly into a development instance? I think ‘wrong’ is a bit strong here. But there are a few pitfalls that you might want to look out for. A few come to mind - and I’m sure Chet could add many more as my memory fails me at the moment. But here goes:

    - The development instance isn’t properly backed up - would hate to lose that work
    - Development is wiped once a week and copied over from Prod - don’t laugh
    - Someone clobbers your code
    - You accidentally on purpose clobber someone else’s code
    - The more developers you have in a single fish pond, the greater the chance something ‘bad’ will happen

    This Isn’t One of Those Posts Where I Tell You What You Should Be Doing. I realize many shops won’t be open to allowing developers to stage their own local copies of Oracle. But I would at least be aware that many of your developers are probably doing this anyway - with or without your tacit approval. SQL Developer can do local file tracking, but you should be using source control too! I will say that I think it’s imperative that you control your source code outside the database, even if your development team is comprised of a single developer. Store your source code in a file, and control that file in something like Subversion. You would be shocked at the number of teams that do not use a source control system. I know I continue to be shocked no matter how many times I meet another team running by the seat of their pants. I’d love to hear how your development process works. And of course I want to know how SQL Developer and the rest of our tools can better support your processes. And one last thing: if you want a fun and interactive presentation experience, be sure to have Chet in the room.


  • teamviewer in "client only" mode

    - by Tommy
    I have a remote PC with TeamViewer on it. I want to use my local TeamViewer installation only for connecting to that remote PC, and I don't want any services (teamviewerd) running on my local Ubuntu 14.04. I've tried to disable the service by doing

    sudo teamviewer --daemon disable

    but after that I'm not able to connect to the remote PC, because TeamViewer reports the error "The TeamViewer daemon is not running!" on start, and I'm not able to access the GUI. Is there a way to connect to the remote PC with TeamViewer with no teamviewerd running?
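
    (One hedged workaround, not from the question: the Linux client does need teamviewerd while the GUI runs, but nothing requires the daemon to stay enabled at boot - you can bring it up only for the session. The --daemon start/stop/disable options are the ones the client itself reports; the wrapping below is illustrative.)

        # keep the daemon disabled at boot, start it only when connecting
        sudo teamviewer --daemon start
        teamviewer                      # connect to the remote PC from the GUI
        sudo teamviewer --daemon stop   # shut it down again afterwards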


  • Different behaviour with windows authentication on IIS7 websites

    - by amaters
    I need to run a website with just Windows authentication. Given the following situation:

    The location of the default website is: c:\inetpub\wwwroot
    The location of my code is: c:\Sites\WebApp
    My hosts file is edited so any .local I use points to 127.0.0.1

    I have created a new application called 'AppX' underneath the default website and pointed it to c:\Sites\WebApp. It uses the DefaultAppPool. When I switch off anonymous and switch on Windows authentication, all works well when I go to localhost/AppX/. What I really want is a new website (no need to question why I want this). So I created Website2 and did exactly the same creation of the application. Everything is the same: destination, app pool and authentication. Now when I browse to this website at web2.local/AppX/ I get the 401.2 - Unauthorized error. What am I missing here?


  • Install of AppFabric RC stops AppFabric Monitoring (some traps for young players)

    - by Rob Addis
    I uninstalled AppFabric Beta 2 and installed AppFabric RC. The AppFabricEventCollection service is started (it runs under Local Service, which is a db_owner on the Monitoring database, to prove this wasn’t the issue). The SQLServerAgent service is started. Nothing is being written to the Monitoring DB staging table, and thus nothing is being written to the event tables or seen in the AppFabric dashboard. Nothing has been written to the following event logs:

    - Microsoft-Windows-Application Server-System Services\Admin
    - Microsoft-Windows-Application Server-System Services\Operational

    The Microsoft-Windows-Application Server-System Services\Debug event log is not shown in the event viewer. The WCF configuration appears fine; the connection string to the Monitoring DB is correct. Monitoring is set to “Troubleshooting” and no errors are shown on the “Configure WCF and WF for Application” dialog. So the problem seemed to lie with either AppFabric (which writes to the event log) or the AppFabricEventCollection service. I thought I was flummoxed... However, one of my colleagues said: have you checked the etwProviderId? I was using a config which was created under AppFabric Beta 2, which had a different etwProviderId. So I deleted the following section and all other references to AppFabric monitoring from the web.config and then recreated them using the IIS “Configure WCF and WF for Application” dialog, setting the level to Troubleshooting.

    <diagnostics etwProviderId="6b44a7ff-9db4-4723-b8cf-1b584bf1591b">
        <endToEndTracing propagateActivity="true" messageFlowTracing="true" />
    </diagnostics>

    I then called a service to create some log entries. Still nothing was written to the Monitoring DB staging table... I checked the Microsoft-Windows-Application Server-System Services\Admin event log. It had the following entry:

    Configuration error. Please see the details to correct the problem.
    Detailed information:
    Filename: \\?\C:\Users\xxx\Documents\dotnetdev\Frameworks\SOA\xxx.SOA.Framework\xxx.SOA.Framework.MockServices\SimpleServiceParent\web.config
    Error: Cannot read configuration file due to insufficient permissions
    System.UnauthorizedAccessException: Filename: \\?\C:\Users\xxx\Documents\dotnetdev\Frameworks\SOA\IAG.SOA.Framework.MockServices\SimpleServiceParent\web.config
    Error: Cannot read configuration file due to insufficient permissions

    And guess who the user was... Local Service. Yes, yes, I should have used a better user in the AppFabric RC setup to run the AppFabricEventCollection service under! So I changed the user to a more appropriate one and removed Local Service as a dbo, and hey presto!


  • CUPS basic auth error through web interface

    - by Inaimathi
    I'm trying to configure CUPS to allow remote administration through the web interface. There's enough documentation out there that I can figure out what to change in my cupsd.conf (changing Listen localhost:631 to Port 631, and adding Allow @LOCAL to the /, /admin and /admin/conf sections). I'm now at the point where I can see the CUPS interface from another machine on the same network. The trouble is, when I try to Add Printer, I'm asked for a username and password, but my response is rejected even when I know I've gotten it right (I assume it's asking for the username and password of someone in the lpadmin group on the server machine; I've sshed in with the credentials it's rejecting, and the user I'm using has been added to the lpadmin group). If I disable auth outright, by changing DefaultAuthType Basic to DefaultAuthType None, I get an "Unauthorized" error instead of a password request when I try to Add Printer. What am I doing wrong? Is there a way of letting users from the local network administer the print server through the CUPS web interface?

    EDIT: By request, my complete cupsd.conf (spoiler: a minimally edited version of the default config file that comes with the edition of CUPS in the Debian wheezy repos):

    LogLevel warn
    MaxLogSize 0
    SystemGroup lpadmin
    Port 631
    # Listen localhost:631
    Listen /var/run/cups/cups.sock
    Browsing On
    BrowseOrder allow,deny
    BrowseAllow all
    BrowseLocalProtocols CUPS dnssd
    # DefaultAuthType Basic
    DefaultAuthType None
    WebInterface Yes

    <Location />
      Order allow,deny
      Allow @LOCAL
    </Location>

    <Location /admin>
      Order allow,deny
      Allow @LOCAL
    </Location>

    <Location /admin/conf>
      AuthType Default
      Require user @SYSTEM
      Order allow,deny
      Allow @LOCAL
    </Location>

    # Set the default printer/job policies...
    <Policy default>
      # Job/subscription privacy...
      JobPrivateAccess default
      JobPrivateValues default
      SubscriptionPrivateAccess default
      SubscriptionPrivateValues default

      # Job-related operations must be done by the owner or an administrator...
      <Limit Create-Job Print-Job Print-URI Validate-Job>
        Order deny,allow
      </Limit>

      <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
        Require user @OWNER @SYSTEM
        Order deny,allow
      </Limit>

      # All administration operations require an administrator to authenticate...
      <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default CUPS-Get-Devices>
        AuthType Default
        Require user @SYSTEM
        Order deny,allow
      </Limit>

      # All printer operations require a printer operator to authenticate...
      <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
        AuthType Default
        Require user @SYSTEM
        Order deny,allow
      </Limit>

      # Only the owner or an administrator can cancel or authenticate a job...
      <Limit Cancel-Job CUPS-Authenticate-Job>
        Require user @OWNER @SYSTEM
        Order deny,allow
      </Limit>

      <Limit All>
        Order deny,allow
      </Limit>
    </Policy>

    # Set the authenticated printer/job policies...
    <Policy authenticated>
      # Job/subscription privacy...
      JobPrivateAccess default
      JobPrivateValues default
      SubscriptionPrivateAccess default
      SubscriptionPrivateValues default

      # Job-related operations must be done by the owner or an administrator...
      <Limit Create-Job Print-Job Print-URI Validate-Job>
        AuthType Default
        Order deny,allow
      </Limit>

      <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
        AuthType Default
        Require user @OWNER @SYSTEM
        Order deny,allow
      </Limit>

      # All administration operations require an administrator to authenticate...
      <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default>
        AuthType Default
        Require user @SYSTEM
        Order deny,allow
      </Limit>

      # All printer operations require a printer operator to authenticate...
      <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
        AuthType Default
        Require user @SYSTEM
        Order deny,allow
      </Limit>

      # Only the owner or an administrator can cancel or authenticate a job...
      <Limit Cancel-Job CUPS-Authenticate-Job>
        AuthType Default
        Require user @OWNER @SYSTEM
        Order deny,allow
      </Limit>

      <Limit All>
        Order deny,allow
      </Limit>
    </Policy>
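
    (A couple of server-side checks worth running, based on general CUPS behaviour rather than anything in the question: Basic auth here validates against local system accounts, so the account must be in the SystemGroup and have a password the server's PAM stack can verify. Commands assume Debian; 'someuser' is a placeholder.)

        # confirm the account CUPS should accept really is in lpadmin
        groups someuser
        sudo usermod -aG lpadmin someuser   # add it if missing
        # apply cupsd.conf changes, then watch the log while retrying the login
        sudo service cups restart
        sudo tail -f /var/log/cups/error_log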


  • RTS Movement + Navigation + Destination

    - by Oliver Jones
    I'm looking into building my own simple RTS game, and I'm trying to get my head around the movement of single and multi-selected units (developing in Unity). After much research, I now know that it's a bigger task than I thought. So I need to break it down. I already have an A* navigation system with static obstacles taken into account. I don't want to worry about dynamic local avoidance right now. So I guess my first break-down question would be: how would I go about moving multiple units to the same location? Right now my units move to the location, but because they're all told to go to the same location, they start to 'fight' over one another to get there. I think there are two paths to go down:

    1) Give each individual unit a separate destination point that is close to the 'master' destination point, and get the units to move to that.
    2) Group my selected units in a flock formation, and move that entire flock towards the destination point.

    Questions about each path:

    1a) How can I go about finding a suitable destination point that is close to the master destination? What happens if there isn't a suitable destination point?
    1b) Would this be more CPU-heavy, as it has to compute a path for each unit? (40 unit count.)
    2a) Is this a good idea? Not giving the units themselves a destination, but instead the flock (which holds the units within). The units within the flock could then maintain a formation (local avoidance) - though, again, local avoidance is not an issue at this time.
    2b) Not sure what results I would get with a flock of 5 units versus a flock of 40, as the radius would be greater - which might mess up my A* navigation system. In other words: a flock of 2 units will be able to move down an alleyway, but a flock of 40 won't. But my nav system won't take that into account.

    I would appreciate any feedback. Kind regards, Ollie Jones


  • Ubuntu Server available updates

    - by Rapture
    In Ubuntu 11.04 Server, when I would log in via ssh it would tell me how many packages are available for updating in the welcome message. After upgrading to 11.10 I no longer get that information. Is there a package I need to install or a config file that needs changing?

    11.04 output:

    Welcome to Ubuntu 11.10 (GNU/Linux 3.0.0-12-generic x86_64)
    * Documentation: https://help.ubuntu.com/
    32 packages can be updated.
    8 updates are security updates.
    Last login: Mon Nov 21 16:19:01 2011 from han-solo.local

    11.10 output:

    Welcome to Ubuntu 11.10 (GNU/Linux 3.0.0-12-server x86_64)
    * Documentation: https://help.ubuntu.com/11.10/serverguide/C
    No mail.
    Last login: Tue Nov 22 19:07:19 2011 from han-solo.local
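
    (A hedged guess at the fix, based on how that banner is generated: the "N packages can be updated" fragment comes from update-notifier's update-motd hook, which can go missing on upgrade. The package and script path below are from memory of the 11.x releases and should be verified on your system.)

        # reinstall the MOTD hook that prints pending-update counts
        sudo apt-get install update-notifier-common
        # regenerate the cached counts now instead of waiting for the daily run
        # (script path is an assumption -- check /usr/lib/update-notifier/)
        sudo /usr/lib/update-notifier/update-motd-updates-available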


  • Failed to compile Network Manager 0.9.4

    - by Oleksa
    After upgrading to 12.04 I needed to re-compile NetworkManager to version 0.9.4.0 again. However, with version 0.9.4.0 the compilation fails in the dns-manager directory with the following error:

    $ make
    ...
    Making all in dns-manager
    make[4]: Entering directory "/home/stasevych/install/network-manager/nm0.9.4.0/network-manager-0.9.4.0/src/dns-manager"
    /bin/bash ../../libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I../.. -I../../src/logging -I../../libnm-util -I../../libnm-util -I../../src -I../../include -I../../include -I/usr/include/libnl3 -I/usr/include/libnl3 -I/usr/include/libnl3 -I/usr/include/dbus-1.0 -I/usr/lib/x86_64-linux-gnu/dbus-1.0/include -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -pthread -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -DLOCALSTATEDIR=\"/usr/local/var\" -Wall -std=gnu89 -g -O2 -Wshadow -Wmissing-declarations -Wmissing-prototypes -Wdeclaration-after-statement -Wfloat-equal -Wno-unused-parameter -Wno-sign-compare -fno-strict-aliasing -Wno-unused-but-set-variable -Wundef -Werror -MT libdns_manager_la-nm-dns-manager.lo -MD -MP -MF .deps/libdns_manager_la-nm-dns-manager.Tpo -c -o libdns_manager_la-nm-dns-manager.lo `test -f 'nm-dns-manager.c' || echo './'`nm-dns-manager.c
    libtool: compile: gcc -DHAVE_CONFIG_H -I. -I../.. -I../../src/logging -I../../libnm-util -I../../libnm-util -I../../src -I../../include -I../../include -I/usr/include/libnl3 -I/usr/include/libnl3 -I/usr/include/libnl3 -I/usr/include/dbus-1.0 -I/usr/lib/x86_64-linux-gnu/dbus-1.0/include -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -pthread -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -DLOCALSTATEDIR=\"/usr/local/var\" -Wall -std=gnu89 -g -O2 -Wshadow -Wmissing-declarations -Wmissing-prototypes -Wdeclaration-after-statement -Wfloat-equal -Wno-unused-parameter -Wno-sign-compare -fno-strict-aliasing -Wno-unused-but-set-variable -Wundef -Werror -MT libdns_manager_la-nm-dns-manager.lo -MD -MP -MF .deps/libdns_manager_la-nm-dns-manager.Tpo -c nm-dns-manager.c -fPIC -DPIC -o .libs/libdns_manager_la-nm-dns-manager.o
    mv -f .deps/libdns_manager_la-nm-dns-manager.Tpo .deps/libdns_manager_la-nm-dns-manager.Plo
    /bin/bash ../../libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I../.. -I../../src/logging -I../../libnm-util -I../../libnm-util -I../../src -I../../include -I../../include -I/usr/include/libnl3 -I/usr/include/libnl3 -I/usr/include/libnl3 -I/usr/include/dbus-1.0 -I/usr/lib/x86_64-linux-gnu/dbus-1.0/include -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -pthread -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -DLOCALSTATEDIR=\"/usr/local/var\" -Wall -std=gnu89 -g -O2 -Wshadow -Wmissing-declarations -Wmissing-prototypes -Wdeclaration-after-statement -Wfloat-equal -Wno-unused-parameter -Wno-sign-compare -fno-strict-aliasing -Wno-unused-but-set-variable -Wundef -Werror -MT libdns_manager_la-nm-dns-dnsmasq.lo -MD -MP -MF .deps/libdns_manager_la-nm-dns-dnsmasq.Tpo -c -o libdns_manager_la-nm-dns-dnsmasq.lo `test -f 'nm-dns-dnsmasq.c' || echo './'`nm-dns-dnsmasq.c
    libtool: compile: gcc -DHAVE_CONFIG_H -I. -I../.. -I../../src/logging -I../../libnm-util -I../../libnm-util -I../../src -I../../include -I../../include -I/usr/include/libnl3 -I/usr/include/libnl3 -I/usr/include/libnl3 -I/usr/include/dbus-1.0 -I/usr/lib/x86_64-linux-gnu/dbus-1.0/include -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -pthread -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -DLOCALSTATEDIR=\"/usr/local/var\" -Wall -std=gnu89 -g -O2 -Wshadow -Wmissing-declarations -Wmissing-prototypes -Wdeclaration-after-statement -Wfloat-equal -Wno-unused-parameter -Wno-sign-compare -fno-strict-aliasing -Wno-unused-but-set-variable -Wundef -Werror -MT libdns_manager_la-nm-dns-dnsmasq.lo -MD -MP -MF .deps/libdns_manager_la-nm-dns-dnsmasq.Tpo -c nm-dns-dnsmasq.c -fPIC -DPIC -o .libs/libdns_manager_la-nm-dns-dnsmasq.o

    nm-dns-dnsmasq.c: In function 'update':
    nm-dns-dnsmasq.c:274:2: error: passing argument 1 of 'g_slist_copy' discards 'const' qualifier from pointer target type [-Werror]
    /usr/include/glib-2.0/glib/gslist.h:82:10: note: expected 'struct GSList *' but argument is of type 'const struct GSList *'
    cc1: all warnings being treated as errors

    make[4]: *** [libdns_manager_la-nm-dns-dnsmasq.lo] Error 1
    make[4]: Leaving directory "/home/stasevych/install/network-manager/nm0.9.4.0/network-manager-0.9.4.0/src/dns-manager"
    make[3]: *** [all-recursive] Error 1
    make[3]: Leaving directory "/home/stasevych/install/network-manager/nm0.9.4.0/network-manager-0.9.4.0/src"
    make[2]: *** [all] Error 2
    make[2]: Leaving directory "/home/stasevych/install/network-manager/nm0.9.4.0/network-manager-0.9.4.0/src"
    make[1]: *** [all-recursive] Error 1
    make[1]: Leaving directory "/home/stasevych/install/network-manager/nm0.9.4.0/network-manager-0.9.4.0"
    make: *** [all] Error 2

    Has anybody faced similar errors? Thank you in advance for your help.
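
    (Not from the question: the failure is only a warning promoted to an error by -Werror, so one pragmatic workaround while waiting for an upstream code fix is to rebuild with the extra warnings toned down. The configure flag name is an assumption from NetworkManager's autotools setup - run ./configure --help to confirm it on your tree.)

        # rebuild so the const-qualifier warning stays a warning, not an error
        ./configure --enable-more-warnings=no   # flag name: verify via --help
        make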


  • Exporting .FBX model into XNA - unorthogonal bones

    - by Sweta Dwivedi
    I created a butterfly model in 3ds Max with some basic animation; however, trying to export it to .FBX format I get the following message. Any idea how I can transform the wings to be orthogonal?

    One or more objects in the scene has local axes that are not perpendicular to each other (non-orthogonal). The FBX plug-in only supports orthogonal (or perpendicular) axes and will not correctly import or export any transformations that involve non-perpendicular local axes. This can create an inaccurate appearance with the affected objects: -Right.Wing

    I have attached a picture for reference.


  • FBX SDK Not Converting Child Node Coordinate Systems

    - by Al Bundy
    I am trying to import a scene into my application from an .fbx file. In 3ds Max, the scene and its local translations are as follows:

    Root (0, 0, 0)
     '-Sphere001 (-15, 30, 0)
       '-Sphere002 (-2, -30, 0)
         '-Sphere003 (-30, -20, 0)
     '-Cube001 (35, -15, 0)

    This is the code that I am using to get the translations of each node:

    FbxDouble3 fbxPosition = pChild->LclTranslation.Get();
    FbxDouble3 fbxRotation = pChild->LclRotation.Get();
    FbxDouble3 fbxScale = pChild->LclScaling.Get();

    When I try to import the scene, the first node from the scene is getting converted to a right-handed system, using this conversion: (X, Z, -Y), but none of its child nodes are. After importing the scene, the local translations I get are as follows:

    Root (0, 0, 0)
    --Sphere001 (-15, 0, -30) - converted
    ----Sphere002 (-2, -30, 0) - not converted
    ------Sphere003 (-30, -20, 0) - not converted
    --Cube001 (35, 0, 15) - converted

    Can anybody help me make sense of this? Thanks


  • BizTalk 2009 - Installing BizTalk Server 2009 on XP for Development

    - by StuartBrierley
    At my previous employer, when developing for BizTalk Server 2004 using Visual Studio 2003, we made use of separate development and deployment environments; developing in Visual Studio on our client PCs and then deploying to a seperate shared BizTalk 2004 Server from there.  This server was part of a multi-server Standard BizTalk environment comprising of separate BizTalk Server 2004 and SQL Server 2000 servers.  This environment was implemented a number of years ago by an outside consulting company, and while it worked it did occasionally cause contention issues with three developers deploying to the same server to carry out unit testing! Now that I am making the design and implementation decisions about the environment that BizTalk will be developed in and deployed to, I have chosen to create a single "server" installation on my development PC, installling SQL Server 2008, Visual Studio 2008 and BizTalk Server 2009 on a single system.  The client PC in use is actually a MacBook Pro running Windows XP; not the most powerful of systems for high volume processing but it should be powerful enough to allow development and initial unit testing to take place. I did not need to, and so chose not to, install all of the components detailed in the Microsoft guide for installing BizTalk 2009 on Windows XP but I did follow the basics of the procedures detailed within.  Outlined below are the highlights of this process and any details of what choices I made.   Install IIS I had previsouly installed Windows XP, including all current service packs and critical updates.  At the time of installation this included Service Pack 3, the .Net Framework 3.5 and MS Windows Installer 3.1.  Having a running XP system, my first step was to install IIS - this is quite straightforward and posed no difficulties. Install Visual Studio 2008 The next step for me was to install Visual Studio 2008.  Making sure to select a custom installation is crucial at this point, as you need to make sure that you deselect SQL Server 2005 Express Edition as it can cause the BizTalk installation to fail.  The installation guide suggests that you only select Visual C# when selecting features to install, but  I decided that due to some legacy systems I have code for that I would also select the VB and ASP options. Visual Studio 2008 Service Pack 1 Following the completion of the installation of Visual Studio itself you should then install the Visual Studio 2008 Service Pack 1. SQL Server 2008 Standard Edition The next step before intalling BizTalk Server 2009 itself is to install SQL Server 2008 Standard Edition. On the feature selection screen make sure that you select the follwoing options: Database Engine Services SQL Server Replication Full-Text Search Analysis Services Reporting Services Business Intelligence Development Studio Client Tools Connectivity Integration Services Management Tools Basic and Complete Use the default instance and the same accounts for all SQL server instances - in my case I used the Network Service and Local Service accounts for the two sets of accounts. On the database engine configuration screen I selected windows authentication and added the current user, adding the same user again on the Analysis services Configuration screen.  All other screens were left on the default settings. The SQL Server 2008 installation also included the installation of hotfix for XP KB942288-v3, the Windows Installer 4.5 Redistributable. 
    System Configuration
    At this stage I took a moment to disable the SQL Server shared memory protocol and enable the Named Pipes and TCP/IP protocols.  These can be found in SQL Server Configuration Manager > SQL Server Network Configuration > Protocols for MSSQLSERVER.  I also made sure that the DTC settings were configured correctly.
    BizTalk Server 2009
    The penultimate step is to install BizTalk Server 2009 Standard Edition.  I had previously downloaded the redistributable prerequisites as a CAB file, so I was able to make use of this when carrying out the installation.  When selecting which components to install I selected:
    - Server Runtime
    - BizTalk EDI/AS2 Runtime
    - WCF Adapter Runtime
    - Portal Components
    - Administrative Tools
    - WCF Administration Tools
    - Developer Tools and SDK
    - Enterprise SSO Administration Module
    - Enterprise SSO Master Secret Server
    - Business Rules Components
    - BAM Alert Provider
    - BAM Client
    - BAM Eventing
    Once installation has completed, clear the launch BizTalk Server Configuration check box and select Finish.
    Verify the Installation
    Before configuring BizTalk Server it is a good idea to check that BizTalk Server 2009 is installed and that SQL Server 2008 has started correctly.  The easiest way to verify the BizTalk installation is to check Programs and Features in Control Panel.  Check that SQL Server has started by looking in SQL Server Configuration Manager.
    Configure BizTalk Server 2009
    Finally we are ready to configure BizTalk Server 2009.  To start this I opted for a custom configuration that allowed me to choose the settings to be used in more detail.  For all databases I selected the local server and default database names.  For all accounts I used a local account that had been created specifically for the BizTalk services.  For all Windows groups I allowed the configuration wizard to create the default local groups.  The configuration wizard then ran.  Upon completion you will be presented with a screen detailing the success or failure of the configuration.  If your configuration failed you will need to sort out the issues and try again (it is possible to save the configuration settings for later use if you want to - except passwords of course!).  If you see lots of nice green ticks - congratulations, BizTalk Server 2009 on XP is now installed and configured ready for development.
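    As a quick aside, if you prefer a scriptable check over opening SQL Server Configuration Manager, a minimal Python sketch along these lines queries the service state - assuming Python is on the box and the default instance is in use, whose service is named MSSQLSERVER (named instances use MSSQL$INSTANCENAME instead):

        import subprocess

        # Ask the Windows service controller for the state of the default
        # SQL Server instance; "RUNNING" appears in the output when started.
        result = subprocess.run(
            ["sc", "query", "MSSQLSERVER"],
            capture_output=True, text=True, check=False,
        )
        if "RUNNING" in result.stdout:
            print("SQL Server is running")
        else:
            print("SQL Server is not running - check Configuration Manager")

    This is just an illustrative alternative to the GUI check described above, not part of the documented installation procedure.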

    Read the article

  • GlassFish Clustering with DCOM on Windows

    - by ByronNevins
    DCOM - Distributed COM, a Microsoft protocol for communicating with Windows machines.
    Why use DCOM?
    In GlassFish 3.1, SSH is used as the standard way to run commands on remote nodes for clustering.  It is very difficult for users to get SSH configured properly on Windows.  SSH does not come with Windows, so we have to depend on third party tools, and the user is then forced to install and configure these tools - which can be tricky.  DCOM support, on the other hand, is available on all supported platforms, and DCOM itself is built in to Windows.  The idea is to use DCOM to communicate with remote Windows nodes.  This has the huge advantage that the user has to do minimal, if any, configuration on the Windows nodes.
    Implementation Highlights
    Two open source libraries have been added to GlassFish:
    - Jcifs - an SMB/CIFS (Windows file sharing) implementation in Java
    - J-Interop - a Java implementation for making DCOM calls to remote Windows computers
    Note that any supported platform can use DCOM to work with Windows nodes - not just Windows.  E.g. you can have a Linux DAS work with Windows remote instances.  All existing SSH commands now have a corresponding DCOM command - except for setup-ssh, which isn't needed for DCOM.  validate-dcom is an all new command.
    New DCOM Commands
    - create-node-dcom
    - delete-node-dcom
    - install-node-dcom
    - list-nodes-dcom
    - ping-node-dcom
    - uninstall-node-dcom
    - update-node-dcom
    - validate-dcom
    - setup-local-dcom (only available via Update Center for GlassFish 3.1.2)
    These commands are in place in the trunk (4.0) and in the branch (3.1.2).
    Windows Configuration Challenges
    There is an effectively infinite number of possible configurations of Windows if you look at it as a combination of main release, service pack, special drivers, software, configuration etc.  Later versions of Windows err on the side of tightening security by default.  This means that the Windows host may need to have configuration changes made, and these changes mostly need to be made by the user.  setup-local-dcom will assist you in making the required changes to the Windows Registry.  See the reference blogs for details.
    The validate-dcom Command
    validate-dcom is a crucial command.  It should be run before any other commands; if it does not run successfully then there is no point in running other commands.  The validate-dcom command must be used from a DAS machine to test a different Windows machine.  If validate-dcom runs successfully you can be confident that all the DCOM commands will work.  Conversely, the opposite is also true: if validate-dcom fails, then no DCOM commands will work.
    What validate-dcom does:
    - Verifies that the remote host is not the local machine
    - Resolves the remote host name
    - Checks that the remote DCOM ports are being listened on (135, 139)
    - Checks that the remote host's File Sharing is enabled (port 445)
    - Copies a file (a script) to the remote host to verify that SMB file sharing is working and authorization is correct
    - Runs the script that it copied on-the-fly to the remote host
    Tips and Tricks
    The bread and butter commands that use DCOM are existing commands like create-instance, start-instance etc.  All of the commands that have dcom in their name are for dealing with the actual nodes.  The way the software works is to call asadmin.bat on the remote machine and run a command.  This means that you can track these commands easily on the remote machine with the usual tools, e.g. using AS_LOGFILE, looking at log files, etc.  It's easy to attach a debugger to the remote asadmin process, "just in time", if necessary.
    How to debug the remote commands:
    Edit the asadmin.bat file that is in the glassfish/bin folder (in GlassFish 4.0+ use glassfish/lib/nadmin.bat) and add these options to the java call:
    -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1234
    Now if you run, say, start-instance on the DAS, you can attach your debugger, at your leisure, to the remote machine's port 1234.  It will be running start-local-instance and patiently waiting for you to attach.
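    As an aside, the network-level checks that validate-dcom performs can be approximated in a few lines of code.  The sketch below is not GlassFish's actual implementation - just a rough standalone equivalent of the port probes, with "winhost" standing in for your remote Windows node:

        import socket

        def port_open(host, port, timeout=3.0):
            """Return True if a TCP connection to host:port succeeds."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        # 135 = DCOM endpoint mapper, 139 = NetBIOS session, 445 = SMB file sharing
        for port in (135, 139, 445):
            status = "open" if port_open("winhost", port) else "closed/blocked"
            print("port %d: %s" % (port, status))

    If any of these ports are unreachable from the DAS machine, the corresponding validate-dcom check would fail too, so this makes a quick first diagnostic.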

    Read the article

  • Is there any way to add a new location to the list of places where nltk looks for the wordnet corpus?

    - by Programming Noob
    I can't use the nltk wordnet lemmatizer because I can't download the wordnet corpus on my university computer due to access rights issues. I get the following error when I try to do so:
    **********************************************************************
    Resource 'corpora/wordnet' not found.  Please use the NLTK Downloader to obtain the resource:  >>> nltk.download()
    Searched in:
    - '/home/XX/nltk_data'
    - '/usr/share/nltk_data'
    - '/usr/local/share/nltk_data'
    - '/usr/lib/nltk_data'
    - '/usr/local/lib/nltk_data'
    **********************************************************************
    When I had the same issue at home, I could resolve it in two ways:
    1. Using nltk.download(), the standard way, and
    2. Creating a new folder at /home/XX/nltk_data and just pasting the corpus directory inside it.
    Now at the university I only have access to /home/XX/bin and not /home/XX directly. So is there any way I could paste the wordnet corpus into /home/XX/bin and then somehow make nltk look for the corpus in that folder?
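    For reference, nltk exposes its data search path as the mutable list nltk.data.path, so the kind of redirection being asked about looks roughly like the sketch below - assuming the corpus has been unpacked so that /home/XX/bin/nltk_data/corpora/wordnet exists:

        import nltk

        # Prepend the accessible directory to nltk's data search path;
        # it must contain the usual layout, i.e. a corpora/wordnet folder.
        nltk.data.path.insert(0, '/home/XX/bin/nltk_data')

        from nltk.stem import WordNetLemmatizer
        print(WordNetLemmatizer().lemmatize('corpora'))  # -> 'corpus'

    Setting the NLTK_DATA environment variable to the same directory before Python starts achieves the same effect.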

    Read the article

  • Assembly Resources Expression Builder

    - by João Angelo
    In ASP.NET you can tackle the internationalization requirement by taking advantage of the native support for local and global resources used with the ResourceExpressionBuilder. With this approach, however, you cannot access public resources defined in external assemblies referenced by your Web application. Since you can extend the .NET resource provider mechanism and create new expression builders, you can work around this limitation and use external resources in your ASPX pages much like you use local or global resources. Finally, if you are thinking "okay, this is all very nice, but where's the code I can use?" - well, it was too much to publish directly here, so download it from Helpers.Web@Codeplex, where you will also find a sample showing how to configure and use it.

    Read the article

  • Connecting Windows 7 to legacy Linux Samba share

    - by bconlon
    I have had to rebuild my Windows 7 PC and all has gone fairly well until I tried to connect to a Samba share on a legacy Linux box running Redhat 8. No matter what combination of domain/user/password I tried, I would just see the same message:
    "The specified network password is not correct."
    This is a misleading and rather annoying error, and a little confusing until I found a hint that Windows 7's default authentication level is not supported by older Samba implementations. I guess I figured this out once before, as it used to work before the rebuild! Anyway, here is the solution:
    1. Open Control Panel->System and Security->Administrative Tools->Local Security Policy (or run secpol.msc).
    2. Select Local Policies->Security Options->Network security: LAN Manager authentication level.
    3. Select 'Send LM and NTLM - use NTLMv2 session security if negotiated' and click OK.
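    The same policy can also be applied without the Local Security Policy snap-in, since it is backed by the LmCompatibilityLevel registry value (level 1 corresponds to the setting chosen above). A minimal sketch, assuming Python is available and run from an elevated prompt:

        import winreg

        # 'Network security: LAN Manager authentication level' lives here;
        # 1 = 'Send LM and NTLM - use NTLMv2 session security if negotiated'
        key = winreg.OpenKey(
            winreg.HKEY_LOCAL_MACHINE,
            r"SYSTEM\CurrentControlSet\Control\Lsa",
            0,
            winreg.KEY_SET_VALUE,
        )
        winreg.SetValueEx(key, "LmCompatibilityLevel", 0, winreg.REG_DWORD, 1)
        winreg.CloseKey(key)

    Editing the registry directly is of course at your own risk; the snap-in route above is the safer option.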

    Read the article
