Search Results

Search found 22716 results on 909 pages for 'network architecture'.

Page 403/909 | < Previous Page | 399 400 401 402 403 404 405 406 407 408 409 410  | Next Page >

  • Why is my router assigning the wrong address via DHCP

    - by Barry
    I have a WRT54G router that is set up to serve out addresses via DHCP. It correctly serves up addresses to every other machine on the network, including another PC, my macbook when connected via wireless, my wife's notebook, and our printer. However, whenever I attach my macbook to the router via an ethernet cable, the address it is given via DHCP is wrong. My local network is set up as 192.168.1.*. However, when my macbook connects with an ethernet cable, it is given the IP 192.168.29.*. Currently, I have the macbook set up with a manual IP address, and all seems to be working fine. Any ideas on what could be causing this?
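    A quick way to see which DHCP server actually answered on the wired interface is to dump the last DHCP reply the Mac received. A minimal sketch, assuming macOS (where the built-in `ipconfig getpacket` shows the reply) and Python 3; the interface name en0 is a guess:

        # dhcp_check.py - print key fields of the DHCP reply on a given interface (macOS).
        # Assumes the built-in macOS `ipconfig getpacket` command; "en0" is a guess,
        # use `networksetup -listallhardwareports` to find the wired interface name.
        import subprocess
        import sys

        iface = sys.argv[1] if len(sys.argv) > 1 else "en0"
        out = subprocess.run(["ipconfig", "getpacket", iface],
                             capture_output=True, text=True, check=True).stdout

        for line in out.splitlines():
            # yiaddr is the address offered; server_identifier is the server that offered it.
            if any(key in line for key in ("yiaddr", "server_identifier", "subnet_mask")):
                print(line.strip())

    If the server_identifier is not 192.168.1.1, something other than the WRT54G (for example a modem in bridge mode or another DHCP server on the wired segment) is answering first, which would explain the 192.168.29.* lease.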

    Read the article

  • Set up router to vpn into proxy server

    - by NKimber
    I have a small network with a single LinkSys router connected to broadband in the US via Comcast. I have a VPN proxy server account that I can use with a standard Windows connection, allowing me to have a geographic IP fingerprint in Europe; this is useful for a number of purposes. I want to set up a 2nd router that automatically connects via VPN to this proxy service, so any hardware connected to router 2 looks as though it is originating network requests in Europe, while any hardware connected to my main router carries normal Comcast traffic (all requests originating from the USA). My 2nd router is a LinkSys WRT54G2, and I'm having trouble getting this configured. My questions: is what I'm trying to do even feasible? Should the WRT54G2 be able to do this with native functionality? Would flashing it with DD-WRT allow me to achieve my objectives? Any help/suggestions much appreciated. Neil

    Read the article

  • Track IP Messenger's chatting by wireshark

    - by Kumar P
    We have a Linux server (RHEL 5) and some client machines (Windows XP) on a local area network. We use the server as a proxy server, running Squid, and the Windows machines reach the internet through that proxy. Now my client machines are using IP Messenger for chatting and sharing files within the local network. How can I trace what they are doing or chatting over IP Messenger, from my server, with the Wireshark packet sniffer? If I can't do it with Wireshark, what else would you suggest?
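    A minimal capture sketch for this, assuming tshark is installed on the server and that IP Messenger is using its default port 2425 (UDP for messages, TCP for file transfer). Note that IP Messenger is peer-to-peer, so its packets only reach the server if they are mirrored or routed through it:

        # ipmsg_capture.py - capture IP Messenger traffic with tshark (run as root).
        # Assumes IP Messenger's default port 2425; the interface name is a placeholder.
        # Because IP Messenger talks client-to-client, a mirror/SPAN port may be needed
        # before any of this traffic is visible on the server at all.
        import subprocess

        INTERFACE = "eth0"                        # adjust to the server's LAN interface
        CAPTURE_FILTER = "udp port 2425 or tcp port 2425"

        subprocess.run([
            "tshark",
            "-i", INTERFACE,
            "-f", CAPTURE_FILTER,                 # kernel-level capture filter
            "-w", "/tmp/ipmsg.pcap",              # save for later inspection in Wireshark
        ])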

    Read the article

  • Thoughts on Thoughts on TDD

    Brian Harry wrote a post entitled Thoughts on TDD that I thought I was going to let lie, but I find that I need to write a response. I find myself in agreement with Brian on many points in the post, but I disagree with his conclusion.
    Not surprisingly, I agree with the things that he likes about TDD. Focusing on the usage rather than the implementation is really important, and this is important whether you use TDD or not. And YAGNI was a big theme in my Seven Deadly Sins of Programming series.
    Now, on to what he doesn't like. He says that he finds it inefficient to have tests that he has to change every time he refactors. Here is where we part company. If you are having to do a lot of test rewriting (say, more than a couple of minutes' work to get back to green) *often* when you are refactoring your code, I submit that either you are testing things that you don't need to test (internal implementation details rather than external behavior), your code perhaps isn't as decoupled as it could be, or maybe you need a visit to refactorers anonymous.
    I also like to refactor like crazy, but as we all know, the huge downside of refactoring is that we often break things. Important things. Subtle things. Which makes refactoring risky. *Unless* we have a set of tests that have great coverage. And TDD (or Example-based Design, which I prefer as a term) gives those to us. Now, I don't know what sort of coverage Brian gets with the unit tests that he writes, but I do know that for the majority of the developers I've worked with (and I count myself in that bucket) the coverage of unit tests written afterwards is considerably inferior to the coverage of unit tests that come from TDD.
    For me, it all comes down to the answer to the following question: How do you ensure that your code works now and will continue to work in the future? I'm willing to put up with a little inefficiency on the front side to get that benefit later. It's not the writing of the code that's the expensive part, it's everything else that comes after. I don't think that stepping through test cases in the debugger gets you what you want. You can verify what the current behavior is, sure, and do it fairly cheaply, but you don't help the guy in the future who doesn't know what conditions were important if he has to change your code.
    The second thing he doesn't like is backing into an architecture (go read his post to see what he means). I've certainly had to work with code that was like this before, and it's a nightmare: the code that nobody wants to touch. But that's not at all the kind of code that you get with TDD, because if you're doing it right you're following the "write a failing test, make it pass, refactor" approach. Now, you may miss some useful refactorings and generalizations this way, but if you do, you can refactor later because you have the tests that make it safe to do so, and your code tends to be easy to refactor because the same things that make code easy to write unit tests for make it easy to refactor.
    I also think Brian is missing an important point. We aren't all as smart as he is. I'm reminded a bit of the lesson of Intentional Programming, Charles Simonyi's paradigm for making programming easier. I played around with Intentional Programming when it was young, and came to the conclusion that it was a pretty good thing if you were as smart as Simonyi is, but it was pretty much a disaster if you were an average developer.
    In this case, TDD gives you a way to work your way into a good, flexible, and functional architecture when you don't have somebody of Brian's talents to help you out. And that's a good thing.

    Read the article

  • Gateway ZA8 Netbook graphics Issue

    - by Hansel
    The graphics keep tearing and are barely usable any time I try to use my netbook. And I did a full install with Ubuntu, so I'm pretty much stuck. These are the specs of the netbook:
    Processor: AMD Athlon™ 64 L110 Single-Core Processor (1.2GHz, 800MHz FSB, 512KB L2 Cache)
    Operating System: Genuine Windows Vista® Home Basic (32-bit) with SP1
    Display: 11.6" HD WXGA Ultrabright™ LED-backlit Display (1366 x 768 resolution, 16:9 aspect ratio)
    Memory: 2048MB DDR2 533MHz SDRAM Single Channel Memory
    Hard Drive: 250GB SATA hard drive
    Color: Classic and Elegant Design with Cherry Red finish
    Wireless Network: 802.11b/g Wi-Fi CERTIFIED®
    Adapter: AC Adapter
    Application Software: Microsoft® Works, Microsoft® Money Essentials, Microsoft® Office Home and Student 2007 (60-day complimentary trial period)
    Battery: 6-Cell Lithium Ion (5200mAh)
    Chassis: Chassis with ATI Radeon® X1270 Graphics and AMD RS690E Chipset
    Dimensions (Box): 3.1" (H) x 14.8" (W) x 10.1" (D) or 80mm (H) x 376mm (W) x 256mm (D)
    Dimensions (System): 1.03" (H) x 11.26" (W) x 7.99" (D) or 26.4mm (H) x 286mm (W) x 203mm (D)
    External Ports: (3) USB 2.0, VGA Connector
    Keyboard and Mouse: Keyboard with Multi-Gesture Touchpad
    Media Card Reader: Multi-in-1 Digital Media Card Reader (Memory Stick®, Memory Stick Pro™, MultiMediaCard, Secure Digital™, xD-Picture Card™)
    Network: 10/100 Ethernet LAN (RJ-45 port)
    What can I do?

    Read the article

  • VMware Fusion 5.0.1, LAN, disconnecting from server [closed]

    - by Maxim
    Currently I am using: OS X 10.7.5 and VMware Fusion 5.0.1 with Windows 7 Professional.
    Problem: getting disconnected from a LAN game.
    Description: Four of us are trying to play The Frozen Throne on the same network. The thing is, they are all running Windows computers, and I am running Windows 7 on VMware. I haven't made any adjustments other than changing from "sharing internet with my Mac" to bridged networking ("autodetect"). When we start to play, everything works, then all of a sudden I get disconnected. This happens every once in a while, and the timing differs from time to time. None of the others ever disconnects. What could be the problem? Thanks for the help!

    Read the article

  • Is it possible to create a read-only user account for security auditing purposes?

    - by user2529583
    An organization requires several administrators to have the role of a security auditor. They must have read-only (via network/remote) access to Windows Server 2008 / R2 systems and have permission to view the server configuration. They must not be able to make any other changes to the server or the network, such as restarting it or making any configuration changes. However, I can't find any built-in settings for a user like this. The closest thing is the "Users" user group [1], however from my understanding every user in the domain is in this group, and its members cannot view the domain server's configuration. So, what are other options for implementing a read-only user account in Windows Server 2008? [1] http://technet.microsoft.com/en-us/library/cc771990.aspx

    Read the article

  • JD Edwards Apps in a Box - Update

    - by Hartmut Wiese
    Summary and clarification: JD Edwards Apps in a Box is a Partner offering to the customer. We as Oracle have a huge interest in getting a successful offering to the market, and we help the Partner build their offering. We provide components like JD Edwards EnterpriseOne and the hardware. The Business Partner adds the installation services and positions this as a solution to the market for a single price. As you know, JD Edwards EnterpriseOne can run on multiple hardware platforms.
    Linux/x86 version: As you all know, we have JD Edwards VM Templates available from Oracle for the x86 architecture. Each Partner should be, or is already, able to install JD Edwards EnterpriseOne using these images from our software delivery cloud. We have now built a master bill of material for an X3-2 hardware configuration. It has been uploaded to the Community Workspace. This is a SUGGESTION and limited to 50 users MAX. However, I strongly recommend that you do a sizing as usual and verify the configuration for each opportunity individually.
    T4-1/X3-2 version: Oracle is not providing similar images for the T4-1 SPARC / Solaris architecture. There is an Optimized Solution team inside Oracle who created an Optimized Solution for JD Edwards some time ago. They created a whitepaper which is still available to download. This whitepaper was used as a starting point; however, we decided to build a new version of it using the latest software and hardware available. This has now been finalized and we are happy to provide it to our partners. This image is more a service we provide for each partner, which they can reuse and extend based on their individual offerings. It is not an officially supported Oracle product and cannot be used to deploy to customers immediately. You cannot resell "JDE in a box". You can use these images to save time while building your own go-to-market offering. You might want to add functionality like Mobility. It is also not complete, as the Deployment Server needs to be configured individually at the customer site. We will create some documentation about what this image contains (and what it does not), and what final installation activities need to be provided by each VAD/Partner in this process. I will send an email to the community once we are ready to share it. You will then find these assets in the Community Workspace.
    The Business Model with Oracle Hardware: For those who have not done any hardware business with Oracle yet: usually a HW reseller orders the hardware through a Value Add Distributor (VAD) and not from Oracle directly. Each Partner needs to have hardware resell rights to do so. The VAD assembles the boxes according to the needs of each customer. It is easily possible for them to prepare the boxes with the images we/you provide. However, the final configuration is something a reseller/implementer needs to do at the customer site. This process is not the same everywhere in the EMEA region. Sometimes a VAD takes the order but does not see the hardware at all. In those cases the VAD cannot provide any help with the pre-loading of any images, and the reseller/implementer needs to do that. In some countries we do not have VADs at all.

    Read the article

  • Per-machine decentralised DNS caching - nscd/lwresd/etc

    - by Dan Carley
    Preface: We have caching resolvers at each of our geographic network locations. These are clustered for resiliency, and their locality reduces the latency of internal requests generated by our servers. This works well, except that a vast quantity of the requests seen over the wire are lookups for the same records, generated by applications which don't perform any DNS caching of their own.
    Questions: Is there a significant benefit to running lightweight caching daemons on the individual servers in order to reduce repeated requests from hitting the network? Does anyone have experience of using [u]nscd, lwresd or dnscache to do such a thing? Are there any other packages worth looking at? Any caveats to beware of, besides the obvious one of caching (and negative caching) stale results?
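    One way to estimate the benefit before rolling anything out is to time repeated lookups the way an application would see them. A rough sketch (the hostname and repeat count are placeholders):

        # dns_timing.py - rough per-lookup latency as seen by applications.
        # If later lookups are not much faster than the first, nothing between the
        # application and the central resolvers is caching, and a local daemon
        # (nscd/lwresd/dnscache) is likely to help. The hostname is a placeholder.
        import socket
        import time

        HOST = "internal-service.example.com"   # placeholder: a name your apps resolve often
        REPEATS = 20

        timings = []
        for _ in range(REPEATS):
            start = time.perf_counter()
            socket.getaddrinfo(HOST, 80)        # goes through the normal system resolver path
            timings.append((time.perf_counter() - start) * 1000)

        print(f"first lookup: {timings[0]:.2f} ms")
        print(f"subsequent (avg): {sum(timings[1:]) / len(timings[1:]):.2f} ms")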

    Read the article

  • Looking for a fiber optic "switch" or "router" for home use

    - by Shrout1
    The gist of my question:
    - What is a "fiber optic" switch called? I.e., a layer 2 Ethernet switch that uses fiber TX and RX connections and sends layer 2 network traffic between the fiber strands that are connected.
    - Can someone purchase a dedicated fiber switch that does not have copper Ethernet ports?
    - What is the current average price of a device like this? Not necessarily looking for product endorsements, just information; it might not make sense to go this route if it is too cost prohibitive.
    - What type of fiber connector is used for terminating a fiber strand into a jack on the wall?
    - Can fiber be "patched" using two jacks and a "patch" cable? Is signal loss a concern with the longest runs at 100-200 ft, a patch cable and media converters?
    The full story: My parents had unterminated fiber optic cable and terminated Cat5e run throughout their home when it was built in 2004. Ten years later the Cat5e isn't providing the throughput that my father needs to accomplish multiple streams of HD and fast system backups throughout the house. He can't reach gigabit speeds across the distance of the Cat5e runs. We are both interested in terminating the fiber connections and using them as high-speed "backbones" to copper switches in each room of the house. It would be easy to attain gigabit speeds (or better, eventually) using the fiber. I have searched and searched for a "fiber optic switch" or "fiber optic router" and cannot find the correct term to describe this piece of hardware. We can use fiber media converters at the end points of each connection, however it would be nice to have a "patch panel" set up in the network closet in the basement that has fiber connections on it and switches the Ethernet streams between the connections/systems in the house. Each fiber media converter costs between $50-$100 a piece; after 10 or so terminated connections it might make sense to find a piece of hardware that does not require media converters. That would depend upon the cost of this hardware. Somewhat unrelated: if we are able to route between these fiber strands successfully, what is the physical connector type used in a jack on the wall, in the same way that RJ45 has a standard wall outlet? What is the fiber optic equivalent of this? In the interim, could we "patch" a couple of fiber strands together in the network closet? Would signal loss be of concern with a run length of 100-200 feet, a patch cable and two media converters? If that would work then it could be used until the funds are available for more.

    Read the article

  • Tips for debugging Samba performance?

    - by j-g-faustus
    Samba gives me 24 MB/s read and 44 MB/s write, while FTP gives 97 and 112 MB/s under the same circumstances. The documentation says that "Generally, you should find that Samba performs similarly to ftp at raw transfer speed." In my case it clearly doesn't. Where can I find tips on how to debug Samba performance? Or, alternatively, tips for replacing Samba with something else? (I can't use FTP, unfortunately, as I need something that can be used with rsync/rsnapshot.)
    More details:
    - Both computers are running Ubuntu 10.10 (using Samba because I have a Mac as well).
    - The Samba share is on a local home network, mounted as: //server.local/share/ on /mnt/share type cifs (rw,mand)
    - Samba performance was tested by copying (cp) a single file of ~4GB to and from the share, using time for timing and calculating transfer speed by hand.
    - FTP performance is the numbers reported by the ftp client for get/put of the same file.
    - iperf gives a network speed of ~900 Mbit/s.
    - bonnie++ gives disk speeds of 200 MB/s on both sides, for block reads as well as block writes.
    - I tried changing the parameters suggested in the performance tuning HOWTO (read/write raw, read size, socket options); most of them made little to no difference. (The one that made a difference caused write speed to drop 50%.)
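    For what it's worth, the by-hand timing can be scripted so that every tuning change is measured the same way. A small sketch (the file and mount paths are placeholders, and the file should be larger than RAM so the page cache doesn't flatter the numbers):

        # smb_copy_bench.py - time one large-file copy onto the CIFS mount and report MB/s,
        # so each smb.conf tweak is measured identically. Paths are placeholders.
        import os
        import shutil
        import time

        SRC = "/home/user/testfile-4gb.bin"     # placeholder: a large local file
        DST = "/mnt/share/testfile-4gb.bin"     # placeholder: path on the CIFS mount

        size_mb = os.path.getsize(SRC) / (1024 * 1024)
        start = time.perf_counter()
        shutil.copyfile(SRC, DST)
        elapsed = time.perf_counter() - start
        print(f"wrote {size_mb:.0f} MB in {elapsed:.1f} s -> {size_mb / elapsed:.1f} MB/s")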

    Read the article

  • Are there any home/soho NAS devices that will backup/sync to the cloud?

    - by 3rdparty
    Looking for a home office (SOHO) market (priced) network hard drive (NAS) that will sync some or all of its content to a cloud-based backup service. The only option I've been able to find so far is NetGear's ReadyNAS Vault, however from what I've read it's not as secure as it could be, and the service is quite expensive ($200/yr for 50GB of cloud storage); it's 'powered' by ElephantDrive. Ideally I would love to see something like Wuala integrated into a LaCie network HDD; conveniently, I suspect this is in the works as LaCie recently acquired Wuala, however nothing has come of it yet. I know there are options to use rsync with a customizable NAS (such as the very versatile and hackable D-Link DNS-323), but the easier this is to set up and maintain, the better. Thanks! ps. I had many links posted within this question, but was limited to posting only one due to anti-spam restrictions - gotta get my 'reputation' higher!

    Read the article

  • Internet wireless connected with limited access, windows vista

    - by r0ca
    I had some malware on my computer, so I did a bit of manual work to remove it, including resetting TCP/IP. Now the malware is gone. I can see my home wireless network and I can connect to it, but when connected I get the "wireless connected with limited access" message. When I open IE I cannot browse. When I tried to ping 192.168.1.1, I got Error Code 1231 (unconnected network problem). I have deactivated my Windows firewall, as I thought it could be hyperactive security; still no luck. I have Norton, but it is not active; I also have Avast and AVG installed, but they are not active. Any ideas?

    Read the article

  • Connecting Windows XP to Windows 7 directly using cable

    - by TPR
    These are the problems I am encountering:
    - XP can access Windows 7, but not the other way around (which is fine, because I don't need it the other way currently).
    - File transfer is too slow, around 0.031 MB/s, even though netperf and NetCPS report around 8-9 MB/s.
    I disabled the firewall on both computers. Both are in the same workgroup. I left the homegroup enabled on Windows 7. Windows 7 sees the connection as an unidentified network. Addresses are 10.1.1.2 (XP) and 10.1.1.1 (Windows 7), subnet mask 255.255.255.0; default gateway and DNS are empty on both of them. Both computers are connected to the internet using wireless (the home network), and they are connected to each other using the wire! If anybody has any pointers, do let me know. I have no problem doing such a setup when both computers are Windows 7. This time one of them is XP though, and that seems to be the problem.
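    To separate the raw TCP speed of the crossover link from SMB overhead, you can push bytes over a plain socket between the two machines. A minimal sketch, assuming Python is available on both ends; the addresses are the ones above and the port is arbitrary:

        # tcp_throughput.py - crude throughput test over the direct cable, independent of
        # Windows file sharing. Run "receiver" on one box and "sender <ip>" on the other.
        import socket
        import sys
        import time

        PORT = 5001                              # arbitrary, must be allowed by any firewall
        CHUNK = 64 * 1024
        TOTAL = 256 * 1024 * 1024                # send 256 MB

        def receiver():
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.bind(("0.0.0.0", PORT))
            srv.listen(1)
            conn, addr = srv.accept()
            received = 0
            start = time.perf_counter()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            elapsed = time.perf_counter() - start
            print(f"{received / 1e6:.0f} MB from {addr[0]} at {received / 1e6 / elapsed:.1f} MB/s")

        def sender(host):
            conn = socket.create_connection((host, PORT))
            buf = b"\0" * CHUNK
            sent = 0
            while sent < TOTAL:
                conn.sendall(buf)
                sent += len(buf)
            conn.close()

        if __name__ == "__main__":
            if sys.argv[1] == "receiver":
                receiver()
            else:
                sender(sys.argv[2])              # e.g. python tcp_throughput.py sender 10.1.1.1

    If this also crawls well below 8-9 MB/s, the problem is below the file-sharing layer (duplex mismatch, driver, cable); if it runs near wire speed, the problem is in SMB itself.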

    Read the article

  • Case Study: Polystar Improves Telecom Networks Performance with Embedded MySQL

    - by Bertrand Matthelié
    Polystar delivers and supports systems that increase the quality, revenue and customer satisfaction of telecommunication services. Headquartered in Sweden, Polystar helps operators worldwide, including Telia, Tele2, Telekom Malaysia and T-Mobile, to monitor their network performance and improve service levels.
    Challenges: Deliver complete turnkey solutions to customers, integrating a database that ensures high performance at scale while being very easy to use, manage and optimize. Enable the implementation of distributed architectures including one database per server while maintaining a low Total Cost of Ownership (TCO). Avoid growing database complexity as the volume of mobile data to monitor and analyze drastically increases.
    Solution: Evaluation of several databases and selection of MySQL based on its high performance, manageability, and low TCO. The MySQL databases implemented within the Polystar solutions handle on average 3,000 to 5,000 transactions per second. Up to 50 million records are inserted every day in each database. Typical installations include between 50 and 100 MySQL databases, up to 300 for the largest ones. Data is then periodically aggregated, with the original records being overwritten, as the need for detailed information becomes unnecessary to operators after a few weeks. The exponential growth in mobile data traffic driven by the proliferation of smartphones and usage of social media requires ever more powerful solutions to monitor, analyze and turn network data into actionable business intelligence. With MySQL, Polystar can deliver powerful, yet easy to manage, solutions to its customers. MySQL-based Polystar solutions enable operators to monitor, manage and improve the service levels of their telecom networks in over a dozen countries from a single location. The new and innovative MySQL features constantly delivered by Oracle help ensure Polystar that it will be able to meet its customers' needs as they evolve.
    "MySQL has been a great embedded database choice for us. It delivers the high performance we need while remaining very easy to use, manage and tune. Power and simplicity at its best." Mats Söderlindh, COO at Polystar.

    Read the article

  • Find DNS server automatically

    - by jdickson
    I've got a Windows 2012 server set up as a domain controller and DNS server in my basement. On my laptop, if I set it to use the IP address of my server as the DNS server, then it works as expected. The problem with that is that I use my laptop outside my home network and I need to switch it back to automatic. The setup is like this:
    ISP -> Router running DD-WRT -> Win 2012 DC/DNS and other network computers
    How can I have my laptop find the DNS server automatically instead of using my ISP's DNS servers?

    Read the article

  • eSTEP Newsletter November 2012

    - by uwes
    Dear Partners,
    We would like to inform you that the November '12 issue of our Newsletter is now available. The issue contains information on the following topics:
    News from Corp: Oracle Celebrates 25 Years of SPARC Innovation; IDC White Paper Finds Growing Customer Comfort with Oracle Solaris Operating System; Oracle Buys Instantis; Pillar Axiom OpenWorld Highlights; Announcement: Oracle Solaris 11.1 Availability (data sheet, new features, FAQs, corporate pages, internal blog, download links, Oracle shop); Announcing StorageTek VSM 6; Announcement: Oracle Solaris Cluster 4.1 Availability (new features, FAQs, cluster corp page, download site, shop for media); Announcement: Oracle Database Appliance 2.4 patch update becomes available.
    Technical Section: Oracle white papers on SPARC SuperCluster; Understanding Parallel Execution; With LTFS, Tape is Gaining Storage Ground, with an additional link to How to Create Oracle Solaris 11 Zones with Oracle Enterprise Manager Ops Center; Provisioning Capabilities of Oracle Enterprise Manager Ops Center 12c; Maximizing your SPARC T4 Oracle Solaris Application Performance, with the following articles: SPARC T4 Servers Set World Record on Siebel CRM 8.1.1.4 Benchmark, SPARC T4-Based Highly Scalable Solutions Post New World Record on SPECjEnterprise2010 Benchmark, SPARC T4 Server Delivers Outstanding Performance on Oracle Business Intelligence Enterprise Edition 11g; Oracle Sun ZFS Storage Appliance Reference Architecture for VMware vSphere4; Why 4K? - George Wilson's ZFS Day Talk; Pillar Axiom 600, with connected subjects: Oracle Introduces Pillar Axiom Release 5 Storage System Software, Driving Down the High Cost of Storage, Thin Provisioning with Pillar Axiom 600, Pillar Axiom 600 - System Overview and Architecture; Migrate to Oracle's SPARC Systems; Top 5 Reasons to Migrate to Oracle's SPARC Systems.
    Learning & Events: Recently delivered Techcasts: Learning Paths; Oracle Database 11g: Database Administration (New) - Learning Path; Webcast: Drill Down on Disaster Recovery; What are Oracle Users Doing to Improve Availability and Disaster Recovery; SAP NetWeaver and Oracle Exadata Database Machine.
    References: ARTstor Selects Oracle's Sun ZFS Storage 7420 Appliances To Support Rapidly Growing Digital Image Library; Scottish Widows Cuts Sales Administration 20%, Reduces Time to Prepare Reports by 75%, and Achieves Return on Investment in First Year; Oracle's CRM Cloud Service Powers Innovation: Applications on Demand; Technology on Demand.
    How to: How to Migrate Your Data to Oracle Solaris 11 Using Shadow Migration; Using svcbundle to Create SMF Manifests and Profiles in Oracle Solaris 11; How to Prepare a Sun ZFS Storage Appliance to Serve as a Storage Device with Oracle Enterprise Manager Ops Center 12c; Command Summary: Basic Operations with the Image Packaging System in Oracle Solaris 11; How to Update to Oracle Solaris 11.1 Using the Image Packaging System; How to Migrate Oracle Database from Oracle Solaris 8 to Oracle Solaris 11; Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster; Ease the Chaos with Automated Patching: Oracle Enterprise Manager Cloud Control 12c; Book excerpt: Oracle Exalogic Elastic Cloud Handbook.
    You find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. The link to the portal is shown below.
    URL: http://launch.oracle.com/
    PIN: eSTEP_2011
    Previously published Newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore; any feedback is appreciated to help us improve the service and information we deliver.
    Thanks and best regards,
    Partner HW Enablement EMEA

    Read the article

  • How do you search for backdoors from the previous IT person?

    - by Jason Berg
    We all know it happens. A bitter old IT guy leaves a backdoor into the system and network in order to have fun with the new guys and show the company how bad things are without him. I've never personally experienced this. The most I've experienced is somebody who broke and stole stuff right before leaving. I'm sure this happens, though. So, when taking over a network that can't quite be trusted, what steps should be taken to ensure everything is safe and secure?

    Read the article

  • getting input/output error from NFS client on RHEL5

    - by Andrew Watson
    I have two RHEL5 boxes on a private network together (192.168.2.0/24) and I am trying to export a file system from one to the other, but I keep getting the following error:
    mount.nfs: Input/output error
    On the client side I see this output:
    mount: trying 192.168.2.101 prog 100003 vers 3 prot tcp port 2049
    mount: trying 192.168.2.101 prog 100005 vers 3 prot tcp port 960
    and on the server side I see this:
    Sep 20 14:14:32 omicron mountd[18739]: authenticated mount request from 192.168.2.87:635 for /srv/nfs/web (/srv/nfs/web)
    but that's all. I opened up iptables so that the whole 192.168.2.0/24 network is allowed to communicate freely, but the public side is locked down to 22, 80 etc. Any ideas?
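    A quick reachability check from the client can confirm whether the ancillary RPC ports, not just 2049, are actually open. A sketch; note that mountd's port floats between restarts unless it is pinned (e.g. via MOUNTD_PORT in /etc/sysconfig/nfs), so 960 is just the value from the log above:

        # nfs_port_check.py - run on the client (192.168.2.87) to verify the server ports
        # an NFSv3 TCP mount needs: portmapper (111), nfsd (2049) and mountd (960 here).
        import socket

        SERVER = "192.168.2.101"
        PORTS = {111: "portmapper", 2049: "nfsd", 960: "mountd (from the log above)"}

        for port, name in PORTS.items():
            try:
                socket.create_connection((SERVER, port), timeout=3).close()
                print(f"{port:5d} {name}: open")
            except OSError as exc:
                print(f"{port:5d} {name}: blocked ({exc})")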

    Read the article

  • 256 Worker Role 3D Rendering Demo is now a Lab on my Azure Course

    - by Alan Smith
    Ever since I came up with the crazy idea of creating an Azure application that would spin up 256 worker roles (please vote if you like it) to render a 3D animation created using the Kinect depth camera, I have been trying to think of something useful to do with it. I have also been busy developing training materials for a Windows Azure course that I will be delivering through a training partner in Stockholm, and for customers wanting to learn Windows Azure. I hit on the idea of combining the render demo and a course lab, creating a lab where the students would create and deploy their own mini render farms, which would participate in a single render job consisting of 2,000 frames. The architecture of the solution is shown below.
    As students would be creating and deploying their own applications, I thought it would be fun to introduce some competitiveness into the lab. In the 256 worker role demo I capture the rendering statistics for each role, so it was fairly simple to include the student's name in these statistics. This allowed the process monitor application to capture the number of frames each student had rendered and display a high-score table. When I demoed the application I deployed one instance that started rendering a frame every few minutes, and the challenge for the students was to deploy and scale their applications, and then overtake my single role instance by the end of the lab time. I had the process monitor running on the projector during the lab so the class could see the progress of their deployments, and how they were performing against my implementation and their classmates.
    When I tested the lab for the first time in Oslo last week it was a great success; the students were keen to be the first to build and deploy their solution and then watch the frames appear. As the students mostly had MSDN subscriptions, they were able to scale to the full 20 worker role instances, and before long we had over 100 worker roles working on the animation. There were, however, a couple of issues caused by the competitive nature of the lab. The first student to scale the application to 20 instances would render the most frames and win; there was no way for others to catch up. Also, as they were competing against each other, there was no incentive to help others on the course get their application up and running.
    I have now re-written the lab to divide the students into teams that will compete to render the most frames. This means that if one developer can deploy and scale quickly, the other teams still have a chance to catch up. It also means that if a student finishes quickly and puts their team in the lead, they will have an incentive to help the other developers on their team get up and running.
    As I was using "Sharks with Lasers" for a lot of my demos, and reserved the sharkswithfreakinlasers namespaces for some of the Azure services (well, somebody had to do it), the students came up with some creative alternatives, like "Camels with Cannons" and "Honey Badgers with Homing Missiles". That gave me the idea of having the teams choose a creative name involving animals and weapons. The team rendering architecture diagram is shown below.
    Render Challenge Rules
    In order to ensure fair play, a number of rules are imposed on the lab:
    - The class will be divided into teams; each team chooses a name.
    - The team name must consist of a ferocious animal combined with a hazardous weapon.
    - Teams can allocate as many worker roles as they can muster to the render job.
    - Frame processing statistics and rendered frames will be vigilantly monitored; any cheating, tampering, and other foul play will result in penalties.
    The screenshot below shows an example of the team render farm in action: Badgers with Bombs have taken a lead over Camels with Cannons, and both are leaving the Sharks with Lasers standing. If you are interested in attending a scheduled delivery of my Windows Azure or Windows Azure Service Bus courses, or would like on-site training, more details are here.

    Read the article

  • VPN Connects with local access only

    - by user20102
    I have Windows Server 2008 and I have set up a VPN. When users log into the VPN they cannot view the internet; it comes up as "local access only" on the client PC. On the server, when I go to the Network and Sharing Centre, it displays (Network) with local and internet below it; the RAS dial-in interface displays as local only there as well. I want all users that connect to my VPN to have internet access (full access). If anyone can help me it would be appreciated. Also, I've activated the dial-in properties and set via policy who can access the VPN and internet. It still doesn't work; there is a connection, but it is local only. Thanks

    Read the article

  • Is there a free embedded SSH solution?

    - by ereOn
    Hi, I'm working for an important company which has some severe network policies. I'd like to connect from my work to my home Linux server (mainly because it allows me to monitor my home-automation installation, but that's off-topic), but of course any SSH connection (TCP port 22) to an external site is blocked. While I understand why this is done (to avoid SSH tunnels, I guess), I really need to have some access to my box. (Well, "need" might be exaggerated, but it would be nice. ;) Do you know of any web-based solution that I could install on my home Linux server that would give me a pseudo-terminal (served using HTTPS) embedded in a web page? I'm not necessarily looking for something graphical: a simple web-embedded SSH console would do the trick. Or do you guys see any other solution that wouldn't compromise network security? Thank you very much for your solutions/advice.

    Read the article

  • Connect to MS Sql 2008 on local VM

    - by Campo
    I have a test machine: Server 2008 with Hyper-V and MSSQL 2008 Enterprise. Let's call it MACHINE A. The VM on it also runs Server 2008 with another MSSQL 2008 Enterprise instance; call it VM B. I set up a DB on MACHINE A, then backed it up and restored it onto VM B, following the "prepare database for mirroring" instructions on MSDN. I used to be able to connect to the VM B instance from the main test server (MACHINE A), but now I cannot for some reason. It cannot seem to find the instance at all, even when I browse network databases. I can ping the VM from any computer on the network and access its shares, so I know it is discoverable. Maybe it's just the end of a long day and I am missing something here.
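    Since the instance has disappeared from browsing, it may help to check the SQL endpoints directly from MACHINE A. A hedged sketch: it assumes the default TCP port 1433 and the SQL Server Browser service on UDP 1434 (a named instance may sit on a dynamic TCP port that the Browser service reports), and the VM's IP address is a placeholder:

        # sql_reachability.py - check VM B's SQL Server endpoints from MACHINE A.
        # Assumes the default port 1433/TCP and the SQL Browser service on 1434/UDP.
        import socket

        VM_B = "192.168.0.20"                   # placeholder: VM B's IP address

        # 1) Plain TCP connect to the default instance port.
        try:
            socket.create_connection((VM_B, 1433), timeout=3).close()
            print("TCP 1433: open")
        except OSError as exc:
            print(f"TCP 1433: blocked or not listening ({exc})")

        # 2) SQL Server Browser "ping" (SSRP): a single 0x02 byte to UDP 1434 should
        #    return a text blob listing instance names and their ports.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(3)
        try:
            sock.sendto(b"\x02", (VM_B, 1434))
            data, _ = sock.recvfrom(65535)
            print("UDP 1434 (SQL Browser):", data[3:].decode("ascii", errors="replace"))
        except socket.timeout:
            print("UDP 1434 (SQL Browser): no reply (service stopped or blocked)")
        finally:
            sock.close()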

    Read the article

  • Windows server 2008 R2 IIS7 file permissions

    - by StealthRT
    Hey all, I am trying to figure out why I cannot access an index.php file from within the wwwroot/mollify/backend directory. It keeps coming up with this:
    Server Error: 403 - Forbidden: Access is denied. You do not have permission to view this directory or page using the credentials that you supplied.
    I've given all the permissions (Full Control) on the wwwroot directory that I could think of (IUSR, Guest, GUESTS, IIS_IUSRS, Users, Administrators, NETWORK, NETWORK SERVICE, SYSTEM, CREATOR OWNER & Everyone). I also added index.php to the "Default Document" list under my website settings in IIS 7 Manager. What else am I missing? Thanks! David

    Read the article

  • networking tunnel adapter connections?

    - by Karthik Balaguru
    I understand that a tunnel adapter LAN is for encapsulating IPv6 packets with an IPv4 header so that they can be sent across an IPv4 network. A few queries popped up in my mind based on this.
    If I do 'ipconfig', apart from the Ethernet adapter LAN details, I get a series of statements as below:
    Tunnel adapter Local Area Connection* 6
    Tunnel adapter Local Area Connection* 7
    Tunnel adapter Local Area Connection* 12
    Tunnel adapter Local Area Connection* 13
    Tunnel adapter Local Area Connection* 14
    Tunnel adapter Local Area Connection* 15
    Tunnel adapter Local Area Connection* 16
    Except for *16, all the other tunnel adapter Local Area Connections show "Media Disconnected".
    Why is the numbering for the tunnel adapter LAN connections not sequential? It is like 6, 7, 12, 13, 14, 15, 16 - a strange numbering scheme! I tried to figure it out by thinking of some arithmetic series, but it does not seem to fit, and there is a huge gap between 7 and 12. Any ideas? Also, what is the need for so many tunnel adapter LAN connections? Can you tell me a scenario that requires all of them?
    I did 'ipconfig /all' to get more information. From the listing, I understand that 16, 15, 14 and 12 are Microsoft 6to4 adapters; 13 and 6 are ISATAP adapters; 7 is the Teredo tunneling pseudo-interface. I understand that the above are for automatic tunneling, so that the tunnel endpoints are determined automatically by the routing infrastructure. 6to4 is recommended by RFC 3056 for automatic tunneling that uses protocol 41 for encapsulation. It is typically used when an end-user wants to connect to the IPv6 Internet using their existing IPv4 connection. Teredo is an automatic tunneling technique that uses UDP encapsulation across multiple NATs; that is, it grants IPv6 connectivity to nodes that are located behind IPv6-unaware NAT devices. ISATAP treats the IPv4 network as a virtual IPv6 local link, with mappings from each IPv4 address to a link-local IPv6 address; that is, it transmits IPv6 packets between dual-stack nodes on top of an IPv4 network. To put it in simple words, ISATAP is an intra-site mechanism, while 6to4 and Teredo are inter-site tunnelling mechanisms.
    It seems that Teredo alone should be enabled by default in Vista, but my system does not show it as enabled by default. Interestingly, it shows a 6to4 tunnel adapter (Tunnel adapter LAN connection 16) enabled by default. Any specific reasons for that? And if I do 'ipconfig /all', why is only one Teredo adapter present while four 6to4 adapters are present?
    I searched the internet for answers to the above queries, but I am unable to find clear answers.

    Read the article
