Search Results

Search found 2962 results on 119 pages for 'cisco vpn'.

Page 109/119 | < Previous Page | 105 106 107 108 109 110 111 112 113 114 115 116  | Next Page >

  • powershell vs GPO for installation, configuration, maintenance

    - by user52874
    My question is about using PowerShell scripts to install, configure, update and maintain Windows 7 Pro/Ent workstations in a 2008R2 domain, versus using GPO/ADMX/msi. Here's the situation:

    Because of a comedy of cumulative corporate bumpfuggery we suddenly found ourselves having to design, configure and deploy a full Windows Server 2008R2 and Windows 7 Pro/Enterprise environment on a very short notice and delivery schedule. Of course, I'm not a Windows expert by any means, and we're so understaffed that our buzzword bingo includes 'automate' and 'one-button' and 'it needs to Just Work'. (FWIW, I started with DEC, then on to Solaris and Cisco, then Linux of various flavors with a smattering of BSD nowadays. I use Windows for email and to fill out forms.) So we decided to bring in a contractor to do this for us, and they met the deadline. The system is up and mostly usable, and this is good. We would not have been able to do this ourselves. But it's the 'mostly' part that is proving to be the PIMA now, and I'm having to learn Microsoft stuff anyway until/if we can get a new contract with these guys for ongoing operations.

    Here's my question. The contractor used PowerShell almost exclusively for deployment, configuration and updating. My intensive reading over the last week leads me to think that the generally accepted practices for deploying, configuring and updating Microsoft stuff use elements of GPOs and ADMX templates, along with maybe some third-party stuff like PolicyPak. Are there solid reasons that I've not found yet why PowerShell scripts would be preferred over the GPO methods? I'm going to discuss this with the contractor lead when he gets back from his vacation, and he'll be straight with me (nor do I think they set us up). But I can also see this might be a religious issue, so I would still like some background on this. Thoughts or weblinks? Thanks!

    Read the article

  • RAD - Websphere Application server won't start

    - by Caroline
    I am having issues with RAD7. My server will not start for me today. I am working from home and connected to the VPN. Everything works except my server in RAD. It worked fine yesterday in work and had previously worked when I was at home but that was a few weeks ago. Are there any settings that I should look out for? I have disabled my proxy settings in RAD and turned off everything in my firewall. I can ping all the DBs that the server is connecting to. I have even removed all the projects from the server and it will still not start. It keeps trying and then times out after 300s. Any suggestions?

    Read the article

  • User.Identity.Name returning NT AUTHORITY\NETWORK SERVICE i want Domain\USER

    - by Jalvemo
    In my ASP.NET MVC project I have a database connection with the connection string: Data Source=.;Initial Catalog=dbname;Integrated Security=True. All users can execute stored procedures on that connection and I want to log those users, so after each execution I store "User.Identity.Name" to another database. This works great on my development machine, but after deployment, to access the site I have to go through a VPN connection and then remote desktop to the same server that IIS is running on and use a web browser there. Then I get User.Identity.Name: "NT AUTHORITY\NETWORK SERVICE". I would expect it to be the credentials I entered in remote desktop, which have access to the database. Any idea how I can get this to work? IIS 6 authentication: "windows authentication: enabled". web.config:

    Read the article

  • Looking for a source code management system with a good GUI client

    - by Anders Öhrt
    We are currently using CS-RCS Pro for source code management, and are looking to replace it due to performance issues. It is based on client-side file access with no protocol of its own, which makes it painfully slow to use over a slow VPN line since it always rewrites the whole history of a file. It does, however, have a GUI client which is very simple and gives a great overview. We have three main requirements in an SCM: 1) Fast - it must have a server-side service or some other smart way so that working with files with a large history is fast. 2) A good Windows GUI client (not Explorer shell integration, not VS or Eclipse IDE integration), so working with files and branches is easy. 3) The possibility to have several branches checked out at once in different directories. Does anyone have a recommendation of an SCM which fulfills these requirements?

    Read the article

  • Suggestions for transitioning to new GW/private network

    - by Quinten
    I am replacing a private T1 link with a new firewall device with an ipsec tunnel for a branch office. I am trying to figure out the right way to transition folks at the new site over to the new connection, so that they default to using the much faster tunnel. Existing network: 192.168.254.0/24, gw 192.168.254.253 (Cisco router plugged in to private t1) Test network I have been using with ipsec tunnel: 192.168.1.0/24, gw 192.168.1.1 (pfsense fw plugged in to public internet), also plugged in to same switch as the old network. There are probably ~20-30 network devices in the existing subnet, about 5 with static IPs. The remote endpoint is already the firewall--I can't set up redundant links to the existing subnet. In other words, as soon as I change the tunnel configuration to point to 192.168.254.0/24, all devices in the existing subnet will stop working because they point to the wrong gateway. I'd like some ability to do this slowly--such that I can move over a few clients and verify the stability of the new link before moving critical services or less tolerant users over. What's the right way to do this? Change the netmask on all of the devices to /16, and update gateway to point to the new device? Could this cause any problems? Also, how should I handle DNS? The pfsense box is not aware of my Active Directory environment. But if I change DNS to use the local servers, it will result in a huge slowdown as DNS queries will still be routed over the private t1. I need some help coming up with a plan that's not too disruptive but will really let me thoroughly test the stability of the IPSEC tunnel before I make the final switch. The AD version is 2008R2, as are the servers. Workstations are mostly Windows XP SP3. I have not configured the 192.168.1.0/24 as a site in AD sites and services.
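
    As a quick sanity check on the /16 idea (an editorial sketch, not part of the original question; only the addresses come from the post), the standard library's ipaddress module confirms that both existing /24s sit inside 192.168.0.0/16, so a host re-masked to /16 sees both gateways as on-link and can be repointed at the new firewall while untouched hosts keep using the old gateway. The caveat, which is exactly what needs testing, is that hosts still on /24 will route replies to 192.168.1.x via their old gateway.

        # Sketch: confirm that a /16 mask makes both gateways reachable on-link.
        import ipaddress

        supernet = ipaddress.ip_network("192.168.0.0/16")
        old_net = ipaddress.ip_network("192.168.254.0/24")   # existing subnet (T1 gateway .253)
        new_net = ipaddress.ip_network("192.168.1.0/24")     # IPsec test subnet (pfSense gateway .1)

        print(old_net.subnet_of(supernet), new_net.subnet_of(supernet))   # True True

        # A client re-masked to /16 considers both gateways local:
        host = ipaddress.ip_interface("192.168.254.20/16")
        for gw in ("192.168.254.253", "192.168.1.1"):
            print(gw, ipaddress.ip_address(gw) in host.network)           # True for both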

    Read the article

  • Advantage Database Replication

    - by Jon
    I have a client that wants two sites to have the ability to sync databases, so information at Site A can be synced with Site B and the two sites can look at the same data. I'm not even sure of the infrastructure required. Would a VPN be required to connect the 2 databases, or would an internet-based database work, i.e. Site A to InternetDatabase and Site B to InternetDatabase? Each site copies data to it periodically, then the InternetDatabase syncs it and the sites can pull data down. My other thought was something like Dropbox. If Site A and Site B use a Dropbox account to sync the ADT files etc., can the database at each site then sync with those ADT files? Thanks

    Read the article

  • Forward a call to a webservice to another webservice?

    - by Luhmann
    I have an https webservice behind a firewall on a machine (A) that I cannot access directly, but I do have access to a machine (B) on the same network, from which I can call the webservice on machine A. What is the best way of talking to the webservice on machine A from the outside via machine B (which I access via VPN)? I can obviously create a service with a matching interface on machine B, call the methods on the webservice on machine A, and return the result, but I fear the overhead. Is there another way? Can I somehow forward the request?
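
    One low-overhead alternative to wrapping the interface is to relay raw TCP from machine B to machine A, so the caller still speaks HTTPS end-to-end with A and no matching service contract is needed (an SSH local forward or a reverse proxy on B achieves the same thing). Below is a rough, minimal sketch of such a relay in Python; the listen port and the machine A hostname/port are placeholder assumptions, not values from the question, and the caller may need a hosts entry for machine A's real name so TLS certificate validation still passes.

        # Minimal TCP relay sketch: run on machine B, forwards bytes to machine A.
        # LISTEN_ADDR and TARGET_ADDR are illustrative placeholders.
        import socket
        import threading

        LISTEN_ADDR = ("0.0.0.0", 8443)            # where machine B accepts VPN/outside clients
        TARGET_ADDR = ("machine-a.internal", 443)  # the webservice behind the firewall

        def pump(src, dst):
            # copy bytes in one direction until either side closes
            try:
                while True:
                    data = src.recv(4096)
                    if not data:
                        break
                    dst.sendall(data)
            finally:
                src.close()
                dst.close()

        def handle(client):
            upstream = socket.create_connection(TARGET_ADDR)
            threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(LISTEN_ADDR)
        server.listen(5)
        while True:
            conn, _ = server.accept()
            handle(conn)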

    Read the article

  • Getting local My Documents folder path

    - by smsrecv
    In my C++/WinAPI application I get the My Documents folder path using this code: wchar_t path[MAX_PATH]; SHGetFolderPathW(NULL,CSIDL_PERSONAL,NULL,SHGFP_TYPE_CURRENT,path); One of the users runs my program on a pc connected to his corporate network. He has the My Documents folder on a network. So my code returns something like \paq\user.name$\My Documents Though he says he has a local copy of My Documents. The problem is that when he 'swaps VPN', the online My Documents becomes unavailable and my program crashes with the system error code 64 "The specified network name is no longer available" ( it tries to write to the file opened in the online my docs folder). How can I always get the local My Documents folder path using C++/WinAPI?
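
    For comparison, the same CSIDL_PERSONAL lookup can be reproduced outside the application (here via Python's ctypes, purely for illustration); the check for a UNC-style result and the fall-back to %USERPROFILE%\Documents are assumptions about how one might prefer a local folder when My Documents is redirected, not the questioner's code. In C++ the equivalent idea is checking the returned path's prefix (or PathIsUNC) before opening files on it.

        # Windows-only sketch: query My Documents the same way the question's C++ does,
        # then fall back to a local path if the result is a UNC share (assumption).
        import ctypes
        import os
        from ctypes import wintypes

        CSIDL_PERSONAL = 5        # My Documents
        SHGFP_TYPE_CURRENT = 0

        buf = ctypes.create_unicode_buffer(wintypes.MAX_PATH)
        ctypes.windll.shell32.SHGetFolderPathW(None, CSIDL_PERSONAL, None,
                                               SHGFP_TYPE_CURRENT, buf)
        docs = buf.value

        if docs.startswith("\\\\"):
            # redirected to a network share; prefer the local profile copy if it exists
            local_docs = os.path.join(os.environ["USERPROFILE"], "Documents")
        else:
            local_docs = docs

        print("API returned:", docs)
        print("Local choice:", local_docs)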

    Read the article

  • can ping but cannot browse.Can you help

    - by user231465
    Hi all. I have WinXP with the latest service pack on my old laptop. I have a situation where I can ping and the status says connected, but I cannot browse. I have seen various posts on the net proposing solutions that work for some and not for others. I have tried resetting winsock but nothing changed. If I VPN to my work then I can browse. I would like to avoid reformatting the computer and taking such a drastic action. Any suggestions?
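
    "Can ping but cannot browse" usually comes down to either DNS resolution or outbound TCP on port 80/443 being broken, and separating the two narrows the fix considerably. A quick, hedged sketch of that check in Python (the host name is an arbitrary example; nslookup plus telnet would answer the same question):

        # Sketch: distinguish "DNS is broken" from "TCP to port 80 is blocked".
        import socket

        host = "www.google.com"   # arbitrary example host

        try:
            addr = socket.gethostbyname(host)
            print("DNS OK:", host, "->", addr)
        except socket.gaierror as err:
            print("DNS failed:", err)   # browsing breaks even though ping-by-IP works
            addr = None

        if addr:
            try:
                with socket.create_connection((addr, 80), timeout=5):
                    print("TCP 80 reachable - look at browser/proxy settings instead")
            except OSError as err:
                print("TCP 80 blocked:", err)  # points at winsock/LSP damage or a firewall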

    Read the article

  • How can I modify command line in Ubuntu 10.04?

    - by user482594
    I am using a VPN service from a certain server. I was given a root account, and when I connect with the root account, the command line looks like this: root@xa9g82:/etc/# Then I used useradd to add an account called 'temp'. When I connect to the server as temp, the command line shows only a single character: $ Neither the user information nor the path is shown. Also note that in root's command line I can use Tab to automatically complete filenames, whereas temp's command line inserts a tab space when I press Tab. It is very inconvenient. I am using Ubuntu 10.04. How can I resolve this issue?

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sorts of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up.

    It's been a while since I'd looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn't really find anything that was a good fit. A lot of tools were either a pain to use, didn't have the basic features I needed, or were extravagantly expensive. In the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge.

    I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is to just put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it's running on, I shouldn't have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that testing can be set up quickly and done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool.

    If you're in a hurry and you want to check it out, you can find more information and download links here:

    - West Wind WebSurge Product Page
    - Walk through Video
    - Download link (zip)
    - Install from Chocolatey
    - Source on GitHub

    For a more detailed discussion of the why's and how's and some background, continue reading.

    How did I get here?

    When I started out on this path, I wasn't planning on building a tool like this myself – but I got frustrated enough looking at what's out there to think that I can do better than what's available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around at what's available for Web load testing solutions that would work for our whole team, which consisted of a few developers and a couple of IT guys, both of whom needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn't really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn't even parse, or were extremely expensive (and I mean in the 'sell your liver' range of expensive). Pick your poison.

    There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn't work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey – presumably because the bandwidth required to test over the open Web can be enormous.
    When I asked around on Twitter what people were using – I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I'm not alone when it comes to finding load testing tools that are effective and easy to use.

    As to Visual Studio, the higher-end SKUs of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First it's tied to Visual Studio so it's not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it's complicated enough that there's even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high-end Visual Studio SKUs, and those are mucho Dinero ($$$) – just for load testing that's rarely an option.

    Some of the tools are ultra extensive and let you run analysis tools on the target servers, which is useful, but in most cases just plain overkill that only distracts from what I tend to be ultimately interested in: reproducing problems that occur at high load, and finding the upper limits and 'what if' scenarios as load is ramped up increasingly against a site. Yes it's useful to have Web app instrumentation, but often that's not what you're interested in.

    I still fondly remember the early days of Web testing when Microsoft had the WAST (Web Application Stress Tool), which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn't work with SSL), but the idea behind it was excellent: create tests quickly and easily and provide a decent engine to run them locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death, as have so many of Microsoft's tools that probably were built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tool was no longer downloadable and now it simply doesn't work anymore on higher end hardware.

    West Wind Web Surge – Making Load Testing Quick and Easy

    So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It's super easy to capture sessions either using the built-in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests.

    I've been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability, and it's worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content and test that URL and its results, along with the ability to easily save one or more of those URLs.

    A few weeks back I made a walk through video that goes over most of the features of WebSurge in some detail: Note that the UI has slightly changed since then, so there are some UI improvements.
    Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site have a lot of info on basic operations. For the rest of this post I'll talk about a few deeper aspects that may be of interest, while also giving a glance at how WebSurge works.

    Session Capturing

    As you would expect, WebSurge works with Sessions of URLs that are played back under load. Here's what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web browsers, Windows desktop applications that call services, or your own applications using the built-in Capture tool. With this tool you can capture anything HTTP – SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again, anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore, so basically anything you can capture with Fiddler you can also capture with the WebSurge Session capture tool.

    Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture a session without having to actually install WebSurge, or for your customers to provide an exact playback scenario for a given set of URLs that perhaps cause a problem. Note that not all applications work with Fiddler's proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don't show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls.

    The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don't want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links, you typically don't want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide URL filters in the configuration file – filters let you provide filter strings that, if contained in a URL, will cause requests to be ignored. Again this is useful if you don't filter by domain but you want to filter out things like static image, CSS and script files, etc. Often you're not interested in the load characteristics of these static and usually cached resources, as they just add noise to tests and often skew the overall URL performance results. In my testing I tend to care only about my dynamic requests.

    SSL Captures require Fiddler

    Note that in order to capture SSL requests you'll have to install Fiddler's SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There's a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge.

    Session Storage

    A group of URLs entered or captured makes up a Session. Sessions can be saved and restored easily as they use a very simple text format that is simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line.
    The headers are standard HTTP headers, except that the full URL instead of just the domain-relative path is stored as part of the 1st HTTP header line for easier parsing. Because it's just text and uses the same format that Fiddler uses for exports, it's super easy to create Sessions by hand or under program control by writing out to a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what's stored to disk and what WebSurge parses from. The only 'custom' part of these headers is that the 1st line contains the full URL instead of the domain-relative path and Host: header. The rest of each entry is just plain standard HTTP headers, with each individual URL isolated by a separator line. The format used here is also what Fiddler produces for exports, so it's easy to exchange or view data either in Fiddler or WebSurge. URLs can also be edited interactively so you can modify the headers easily as well: Again – it's just plain HTTP headers, so anything you can do with HTTP can be added here.

    Use it for single URL Testing

    Incidentally I've also found this form to be an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It's actually an easy way to do REST presentations, and I find the simple UI flow actually easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I'm using this more and more for simple URL checks.

    Overriding Cookies and Domains

    Speaking of HTTP headers – you can also overwrite cookies as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point, which would invalidate a test. Using the Options dialog you can actually override the cookie, which replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste it into the cookie field, to replace the existing Cookie with the new one that is now valid. Likewise you can easily replace the domain, so if you captured URLs on west-wind.com and now you want to test on localhost you can do that easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store, which would also work.

    Running Load Tests

    Once you've created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start, as all threads run the same requests simultaneously, which can sometimes skew the results of the first few minutes of a test.

    While sessions run, some progress information is displayed: By default there's a live view of requests displayed in a Console-like window. On the bottom of the window there's a running total summary that displays where you're at in the test, how many requests have been processed and what the requests per second count is currently for all requests.
    Note that for tests that run over a thousand requests a second it's a good idea to turn off the console display. While the console display is nice to see that something is happening and also gives you a slight idea what's happening with actual requests, once a lot of requests are processed this UI updating actually adds a lot of CPU overhead to the application, which may cause the actual load generated to be reduced. If you are running 1000 requests a second there's not much to see anyway, as requests roll by way too fast to see individual lines. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second, so you can always tell that the test is still running.

    Test Results

    When the test is done you get a simple Results display: On the right you get an overall summary as well as a breakdown by each URL in the session. Both successes and failures are highlighted so it's easy to see what's breaking in your load test. The report can be printed, or you can also open the HTML document in your default Web browser for printing to PDF or saving the HTML document to disk.

    The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: This is particularly useful for errors, so you can quickly see and copy what request data was used, and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to Test the URL as configured, including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests.

    Finally you can also get a few charts. The most useful one is probably the Requests per Second chart, which can be accessed from the Charts menu or shortcut. Here's what it looks like: Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete.

    Command Line Interface

    WebSurge runs with a small core load engine, and this engine is plugged into the front end application I've shown so far. There's also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to AB.exe for example) or a full Session file. By default when it runs, WebSurgeCli shows progress every second, showing the total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration, which allows checking for failures or for hitting a specific requests-per-second count, etc. It's also nice to use this as a quick and dirty URL test facility, similar to the way you'd use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI.
    Current Status

    Currently West Wind WebSurge is still in Beta status. I'm still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress.

    I plan on open-sourcing this product, but it won't be free. There's a free version available that provides a limited number of threads and request URLs to run. A relatively low-cost license removes the thread and request limitations. Pricing info can be found on the Web site – there's an introductory price, which is $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison…

    The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there's a lot of interest I don't foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0.

    I'm very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it already has provided a ton of value for me and the work I do that made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section.

    Microsoft MVPs and Insiders get a free License

    If you're a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I'll send you a not-for-resale license. Send any messages to [email protected].

    Resources

    For more info on WebSurge and to download it to try it out, use the following links:

    - West Wind WebSurge Home
    - Download West Wind WebSurge
    - Getting Started with West Wind WebSurge Video

    © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET.

    Read the article

  • Windows Azure Use Case: Fast Acquisitions

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: Many organizations absorb, take over or merge with other organizations. In these cases, one of the most difficult parts of the process is the merging or changing of the IT systems that the employees use to do their work, process payments, and even get paid. Normally this means that the two companies have disparate systems, and several approaches can be used to have the two organizations use technology between them. An organization may choose to retain both systems, and manage them separately. The advantage here is speed, and keeping the profit/loss sheets separate. Another choice is to slowly “sunset” or stop using one organization’s system, and cutting to the other system immediately or at a later date. Although a popular choice, one of the most difficult methods is to extract data and processes from one system and import it into the other. Employees at the transitioning system have to be trained on the new one, the data must be examined and cleansed, and there is inevitable disruption when this happens. Still another option is to integrate the systems. This may prove to be as much work as a transitional strategy, but may have less impact on the users or the balance sheet. Implementation: A distributed computing paradigm can be a good strategic solution to most of these strategies. Retaining both systems is made more simple by allowing the users at the second organization immediate access to the new system, because security accounts can be created quickly inside an application. There is no need to set up a VPN or any other connections than just to the Internet. Having the users stop using one system and start with the other is also simple in Windows Azure for the same reason. Extracting data to Azure holds the same limitations as an on-premise system, and may even be more problematic because of the large data transfers that might be required. In a distributed environment, you pay for the data transfer, so a mixed migration strategy is not recommended. However, if the data is slowly migrated over time with a defined cutover, this can be an effective strategy. If done properly, an integration strategy works very well for a distributed computing environment like Windows Azure. If the Azure code is architected as a series of services, then endpoints can expose the service into and out of not only the Azure platform, but internally as well. This is a form of the Hybrid Application use-case documented here. References: Designing for Cloud Optimized Architecture: http://blogs.msdn.com/b/dachou/archive/2011/01/23/designing-for-cloud-optimized-architecture.aspx 5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0

    Read the article

  • Principles of an extensible data proxy

    - by Wesley
    There is a growing industry now with more than 30 companies playing in the Backend-As-A-Service (BaaS) market. The principle is simple: give companies a secure way of exposing data housed on premises and behind the firewall publicly. This can include database data, as well as Legacy PC data through established connectors; SAP for example provides a connector for transacting with their legacy systems. Early attempts were fixed providers for specific systems like SAP, IBM or Oracle, but the new breed is extensible, allowing Channel Partners and Consultants to build robust integration applications that can consume whatever data sources the client wants to expose. I just happen to be close to finishing a Cloud Based HTML5 application platform that provides robust integration services, and I would like to break ground on an extensible data proxy to complete the system. From what I can gather, I need to provide either an installable web service of some kind, or a Cloud service which the client can configure with VPN for interactions. Then I can build in connectors, which can be activated with a service account, and expose those transactions via web services of some kind (JSON, SOAP, etc). I can also provide a framework that allows people to build in their own connectors, and use some kind of schema to hook those connectors into the proxy. The end result is some kind of public facing web service that could securely be consumed by applications to show data through HTML5 on any device. My gut is, this isn't as hard as it sounds. Almost all of the 30+ companies (With more popping up almost weekly) have all come into existence in the last 18 months or so, which tells me either the root technology, or the skillset to create the technology is in abundance right now. Where should I start on this? Are there some open source projects I can leverage? A specific group of developers I can hire? I'm confident someone here can set me on the right path and save me some time. You don't see this many companies spring up this rapidly if they are all starting from scratch with proprietary technology. The Register: WTF is BaaS One Minute Video from Kony on their BaaS
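
    The "build your own connectors" piece is usually just a registry plus a small common interface that the public-facing service dispatches to. A rough sketch of that shape follows (all class and function names are invented for illustration, not an existing product's API):

        # Rough sketch of a pluggable connector registry for a data proxy.
        from abc import ABC, abstractmethod

        CONNECTORS = {}

        def register(name):
            """Class decorator that hooks a connector into the proxy by name."""
            def wrap(cls):
                CONNECTORS[name] = cls
                return cls
            return wrap

        class Connector(ABC):
            """Common contract every back-end connector implements."""
            def __init__(self, config):
                self.config = config   # credentials, endpoints, service account, etc.

            @abstractmethod
            def query(self, resource, params):
                """Return plain dict/list data the proxy can serialize to JSON."""

        @register("sql")
        class SqlConnector(Connector):
            def query(self, resource, params):
                # placeholder: run a parameterized query against the on-premises database
                return [{"source": "sql", "resource": resource, "params": params}]

        @register("sap")
        class SapConnector(Connector):
            def query(self, resource, params):
                # placeholder: delegate to the vendor-provided legacy connector
                return [{"source": "sap", "resource": resource, "params": params}]

        def handle_request(connector_name, resource, params, config=None):
            """What the public web service endpoint would call per incoming request."""
            connector = CONNECTORS[connector_name](config or {})
            return connector.query(resource, params)

        print(handle_request("sql", "customers", {"region": "EU"}))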

    Read the article

  • EPM 11.1.2.2.000 release - considerations

    - by THE
    (guest Article by Nancy) Please be aware with the upcoming release of EPM v11.1.2.2.000, it is highly recommended you first read the"ORACLE® ENTERPRISE PERFORMANCE MANAGEMENT SYSTEM 11.1.2.2.000 Readme" prior to installing this release. We want to highlight the "Installation Information" section which includes the following late-breaking information: Business Rules Migration to Calculation Manager Oracle Hyperion Calculation Manager has replaced Oracle Hyperion Business Rules as the mechanism for designing and managing business rules, therefore, Business Rules is no longer released with EPM System Release 11.1.2.2. If you are applying 11.1.2.2 as a maintenance release, or upgrading to Release 11.1.2.2, and have been using Business Rules in an earlier release, you must migrate to Calculation Manager rules in Release 11.1.2.2. (See Oracle Enterprise Performance Management System Installation and Configuration Guide.) Planning User Interface Enhancements This release of Planning includes a large number of user interface enhancements, as described in Oracle Hyperion Planning New Features. To optimize performance with these new features, you must implement the following recommended configuration. Server: 64-bit, 16 GB physical RAM Client: Optimized for Internet Explorer 9 and Firefox 10 or higher Client-to-Server Connectivity: High-speed internet connection or VPN connection between client and server, client-to-server ping time < 150 milliseconds for best performance The new, improved Planning user interface requires efficient browsers to handle interactivity provided through Web 2.0 like functionality. In our testing, Internet Explorer 7, Internet Explorer 8, and Firefox 3.x are not sufficient to handle such interactivity, and the responsiveness in these versions of browsers is not as fast as the user interface in the previous releases of Planning. For this reason, we strongly recommend that you upgrade your browser to Internet Explorer 9 or Firefox 10 to get responsiveness similar to what you experienced in previous releases. In some instances, the response times in Internet Explorer 7, Internet Explorer 8 and Firefox 3.x could be acceptable. Hence, we suggest that you uptake the new user interface only after you conduct an end user response test and you are satisfied with the results of these tests for these versions of browsers. Please note that it is still possible to leverage the old user interface and features from Planning Release 11.1.2.1. (For more information, see “Using the Planning Release 11.1.2.1 User Interface and Features” in the Oracle Hyperion Planning Administrator's Guide.) IBM HTTP Server and IIS Default Ports Both IBM HTTP Server and IIS Web Server use 80 as their default port. If you are using WebSphere, you must change one of these defaults so that there is no port conflict. If you have further questions, please utilize the  Planning or Essbase MOS Community.

    Read the article

  • determine if udp socket can be accessed via external client

    - by JohnMerlino
    I don't have access to company firewall server. but supposedly the port 1720 is open on my one ubuntu server. So I want to test it with netcat: sudo nc -ul 1720 The port is listening on the machine ITSELF: sudo netstat -tulpn | grep nc udp 0 0 0.0.0.0:1720 0.0.0.0:* 29477/nc The port is open and in use on the machine ITSELF: lsof -i -n -P | grep 1720 gateway 980 myuser 8u IPv4 187284576 0t0 UDP *:1720 Checked the firewall on current server: sudo ufw allow 1720/udp Skipping adding existing rule Skipping adding existing rule (v6) sudo ufw status verbose | grep 1720 1720/udp ALLOW IN Anywhere 1720/udp ALLOW IN Anywhere (v6) But I try echoing data to it from another computer (I replaced the x's with the real integers): echo "Some data to send" | nc xx.xxx.xx.xxx 1720 But it didn't write anything. So then I try with telnet from the other computer as well: telnet xx.xxx.xx.xxx 1720 Trying xx.xxx.xx.xxx... telnet: connect to address xx.xxx.xx.xxx: Operation timed out telnet: Unable to connect to remote host Although I don't think telnet works with udp sockets. I ran nmap from another computer within the same local network and this is what I got: sudo nmap -v -A -sU -p 1720 xx.xxx.xx.xx Starting Nmap 5.21 ( http://nmap.org ) at 2013-10-31 15:41 EDT NSE: Loaded 36 scripts for scanning. Initiating Ping Scan at 15:41 Scanning xx.xxx.xx.xx [4 ports] Completed Ping Scan at 15:41, 0.10s elapsed (1 total hosts) Initiating Parallel DNS resolution of 1 host. at 15:41 Completed Parallel DNS resolution of 1 host. at 15:41, 0.00s elapsed Initiating UDP Scan at 15:41 Scanning xtremek.com (xx.xxx.xx.xx) [1 port] Completed UDP Scan at 15:41, 0.07s elapsed (1 total ports) Initiating Service scan at 15:41 Initiating OS detection (try #1) against xtremek.com (xx.xxx.xx.xx) Retrying OS detection (try #2) against xtremek.com (xx.xxx.xx.xx) Initiating Traceroute at 15:41 Completed Traceroute at 15:41, 0.01s elapsed NSE: Script scanning xx.xxx.xx.xx. NSE: Script Scanning completed. Nmap scan report for xtremek.com (xx.xxx.xx.xx) Host is up (0.00013s latency). PORT STATE SERVICE VERSION 1720/udp closed unknown Too many fingerprints match this host to give specific OS details Network Distance: 1 hop TRACEROUTE (using port 1720/udp) HOP RTT ADDRESS 1 0.13 ms xtremek.com (xx.xxx.xx.xx) Read data files from: /usr/share/nmap OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ . Nmap done: 1 IP address (1 host up) scanned in 2.04 seconds Raw packets sent: 27 (2128B) | Rcvd: 24 (2248B). The only thing I can think of is a firewall or vpn issue. Is there anything else I can check for before requesting that they look at the firewall server again?
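
    One more check worth making before going back to the firewall team: because UDP is connectionless, nmap reporting the port as "closed" generally just means an ICMP port-unreachable came back, and nc gives no positive confirmation of delivery either. A listener that echoes a reply gives a definite yes/no from the outside. A small sketch (the server address is a placeholder, and Python is used here only for illustration):

        # Sketch: UDP reachability check for udp/1720.
        # Run "python3 udpcheck.py" on the server, "python3 udpcheck.py <server-ip>" on the client.
        import socket
        import sys

        PORT = 1720

        def listen():
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.bind(("0.0.0.0", PORT))
            print("listening on udp/%d" % PORT)
            while True:
                data, peer = s.recvfrom(2048)
                print("got %r from %s" % (data, peer))
                s.sendto(b"ack:" + data, peer)    # echo back so the sender gets proof of delivery

        def send(server_ip):
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.settimeout(5)
            s.sendto(b"probe", (server_ip, PORT))
            try:
                print("reply:", s.recvfrom(2048)[0])   # a reply means udp/1720 passes both ways
            except socket.timeout:
                print("no reply: the packet or the return path is being dropped")

        if __name__ == "__main__":
            listen() if len(sys.argv) == 1 else send(sys.argv[1])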

    Read the article

  • RTS Game Style Application [closed]

    - by Daniel Wynand van Wyk
    My question may seem somewhat odd, but I hope that my specifications will clarify EXACTLY what it is that I am after. I need some help choosing the right tooling for a particular endeavour. My background is in desktop application development and large back-end systems. I have worked primarily on the Microsoft stack using C# and the .Net framework. My goal is to develop a 2D, RTS style, interactive office simulation. The simulation will model various office spaces, office equipment, employees and their interactions with one another. The idea is to abstract the concept of an office completely. Under the hood the application will do many things that are nothing like a game. This includes P2P networking, VPN tunnelling, streaming video, instant messaging, document collaboration, remote screen sharing, file-sharing, virus scanning, VOIP, document scanning, faxing, emailing, distributed computing, content management and much more! A somewhat similar thing has been attempted by IBM, where they created a virtual office in second life. If their attempt was a game, the game-play would be notably horrible, to say the least! The users/players will drive and control my application through the various objects modelled in the simulation. A single application capable of performing all of these various tasks would be a nightmare to navigate for even the most expert user. Using the concept of a game, I can easily separate functionality by assigning them to objects that relate 1-1 with their real world counter-parts. This can greatly simplify computing for novice users, with many added benefits in terms of visibility, transparency of process and centralized configuration. My hope is to make complex computing tasks accessible to all kinds of users and to greatly reduce the cognitive load associated with using the many different utilities and applications inside office settings. The complexity is therefore limited to the complexity of the space in which you find yourself. I want the application to target as many platforms as possible and run on computers that have no accelerated graphics capabilities. The simulation won't contain any of the fancy eye-candy you find in modern games, to the contrary, my "game" will purposefully be clean and simple. The closest thing I could imagine would be an old game like "Theme Hospital" or the first instalment of "The Sims". All the content will be pre-created and not user-generated like Second Life. New functionality will be added via a plugin system. Given my background and nature of my "game", I would like to spend most of my time writing code that does not have to do with the simulated office, as the "game" is really just a glorified application menu. I have done much reading about existing engines, frameworks and tools. I need the help of an experienced game developer who has tried and tested various products over the years who can guide me in the right direction given my very particular needs. I would appreciate any help I can get!

    Read the article

  • Oracle Application in DMZ (Demilitarized Zone)

    - by PRajkumar
    Business Needs

    Large organizations want to expose their Oracle Application services outside their private network (HTTP/HTTPS and SSL). Usually these exposures must exist to promote external communication, so they want to keep an external network from directly referencing the internal network.

    Business Challenges

    · Business does not want to compromise security information
    · Business cannot expose internal domain or internal URL information

    Business Solution

    A DMZ is the solution to this problem. In Oracle Applications we can achieve this in the following way:

    · Oracle Applications consists of a fleet of nodes (FND_NODES), so first decide which node has to be exposed to the public
    · To expose the node to the public, use the profile "Node Trust Level"
    · Set the node to Public/Private (Normal -> private, External -> public)
    · Set the "Responsibility Trust Level" profile to decide whether to expose an Application Responsibility inside or outside the firewall

    Solution Features

    · Exposed web services can be accessed by both internal and external users
    · Configurable and can be very easily rolled out
    · Internal network and business data are secured from outside traffic
    · Unauthorized access to the internal network from outside is prohibited
    · No need for a VPN or Secure FTP server

    Benefits

    · Large organizations running Oracle Applications can expose their web services (HTTP/HTTPS and SSL) to the internet without compromising security information and without exposing their internal domain

    Possible Weak Points

    · If the external firewall is compromised, then the external application server is also compromised, exposing an attack on the E-Business Suite database
    · There's nothing to prevent internal users from attacking the internal application server, also exposing an attack on the E-Business Suite database

    Reference Links

    · https://blogs.oracle.com/manojmadhusoodanan/tags/dmz

    Read the article

  • Can't access some websites using Ubuntu 13.10

    - by Adame Doe
    Something's wrong with Ubuntu. Since I've upgraded to 13.10, I can't access some websites for no apparent reason. I've tried everything imaginable to solve this problem : Made sure that MTUs are the same, Disabled IPv6 in both the network manager and used browsers, Deactivated my network keys, DMZed my computer, Used other DNS like Google and OpenDNS, Checked that no firewall was running my computer ... And it's the same result. I even tried to reinstall Ubuntu a couple of times, but no luck. The most annoying thing about it is I can't access wordpress.org! So, there's no way it could be an ISP restriction of some kind. When I use a VPN, I can access pretty much anything. I'm really frustrated because I have to use wordpress.org very often. Any clue? ifconfig adame@adame-ws:~$ ifconfig eth0 Link encap:Ethernet HWaddr 00:26:18:3d:b0:7c inet addr:10.42.0.1 Bcast:10.42.0.255 Mask:255.255.255.0 inet6 addr: fe80::226:18ff:fe3d:b07c/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:8024 errors:0 dropped:0 overruns:0 frame:0 TX packets:7966 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:684480 (684.4 KB) TX bytes:616608 (616.6 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:8222 errors:0 dropped:0 overruns:0 frame:0 TX packets:8222 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:568269 (568.2 KB) TX bytes:568269 (568.2 KB) wlan0 Link encap:Ethernet HWaddr 00:19:70:40:85:eb inet addr:192.168.2.3 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::219:70ff:fe40:85eb/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1464 Metric:1 RX packets:123705 errors:0 dropped:0 overruns:0 frame:0 TX packets:98141 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:94963545 (94.9 MB) TX bytes:10387470 (10.3 MB) /etc/hosts 127.0.0.1 localhost 127.0.1.1 adame-ws ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters tracepath wordpress.org 1: adame-ws.local 0.092ms pmtu 1500 1: 192.168.2.1 1.300ms asymm 2 1: 192.168.2.1 1.060ms asymm 2 2: no reply 3: no reply 4: no reply 5: no reply 6: no reply 7: no reply 8: no reply ... keep on going like that ping wordpress.org adame@adame-ws:~$ ping wordpress.org PING wordpress.org (66.155.40.250) 56(84) bytes of data. --- wordpress.org ping statistics --- 10 packets transmitted, 0 received, 100% packet loss, time 9071ms

    Read the article

  • Watchguard SSLVPN user connection issue

    - by Tory Newnham
    I have a user that needs access to our SSLVPN on our Watchguard firewall from his company issued laptop. The problem is when he tries to connect as himself he cannot connect. If I login to the machine it works fine, if I add him to the domain admins group in Active Directory it works fine… So, we know it is an access issue but I cannot figure out what access he needs. He is in the SSLVPN-Users group which I thought would give them all the access they needed but apparently not… Here is the output of the SSLVPN Logs when trying to connect: 2012-09-14T15:40:55.834 Launching WatchGuard Mobile VPN with SSL client. Version 11.5.3 (Build 339447) Built:Apr 5 2012 00:25:00 2012-09-14T15:41:18.832 Requesting client configuration from X.X.X.X:443 2012-09-14T15:41:20.386 VERSION file is 5.15, client version is 5.15 2012-09-14T15:41:21.924 Error: connect() failed. ret = -1 errno=10061 (...) 2012-09-14T15:41:23.960 Error: connect() failed. ret = -1 errno=10061 2012-09-14T15:42:00.788 Failed Launch Has anyone had the same issue, or have any ideas on what Group Policy changes need to be made in order for him to have access but not be a Domain Admin? Thanks in Advance!

    Read the article

  • So No TECH job so far.

    - by Ratman21
    Oh, I found some temp work for the US Census and I have managed to keep the house (so far), but it looks like I/we are going to have to do a short sale, and the temp job will be ending soon. On top of that, it looks like the unemployment fund for me is drying up. I will have about one month left after the Census job is done. I am now down to applying for work at the KFC.

    This is the type of work I started with, before I was a tech geek, and really I didn't think I would be doing this kind of work in my later years, but I have a wife and kid. So I've got to suck it up and do it.

    Oh, and here is my new resume… go ahead, I know you want to tear it up. I really don't care any more.

    Scott L. Newman
    45219 Dutton Way, Callahan, FL 32011
    H: (904)879-4880 C: (352)356-0945
    E: [email protected]
    Web: http://beingscottnewman.webs.com/

    OBJECTIVE
    To obtain a Network or Technical support position

    KEYWORD SUMMARY
    CompTIA A+, Network+, and Security+ Certified, Network Operation, Technical Support, Client/Vendor Relations, Networking/Administration, Cisco Routers/Switches, Helpdesk, Microsoft Office Suite, Website Design/Dev./Management, Frame Relay, ISDN, Windows NT/98/XP, Visio, Inventory Management, CICS, Programming, COBOL IV, Assembler, RPG

    QUALIFICATIONS SUMMARY
    Twenty years' experience in computer operations, technical support, and technical writing. Also have two and a half years' experience in internet/intranet operations.

    PROFESSIONAL EXPERIENCE
    October 2009 – Present*: Volunteer Web site and PC technician – Part time, True Faith Christian Fellowship Church – Callahan, FL. Project: Create and maintain the web site for the Church to give it worldwide exposure.
    Aug 2008 – September 2009*: Volunteer Church sound and video technician – Part time, Thomas Creek Baptist Church – Callahan, FL
    *Note: These jobs were for learning and/or keeping skills updated while looking for a tech job and training for new skills.
    February 2005 to October 2008: Client Server Dev/Analyst I, Fidelity National Information Services, Jacksonville, FL (FNIS acquired Certegy in 2005 and out of 20 personnel, I was one of three kept on.)
    August 2003 to February 2005: Senior NetOps Operator, Certegy, St. Pete, FL (In August 2003, Certegy terminated its contract with EDS and out of 40 personnel, I was one of six kept on.)
    Projects: Creation and update of the listing and placement of all raised floor equipment at the St. Pete site. The listing was made up of a floor plan of the raised floor and equipment rack diagrams showing the placement of all devices, using Visio. This was cross-referenced with an inventory Excel document showing which dept was responsible for each device. Sole creator of the Network Operation and Server Operation procedures guide (NetOps Guide).
    Expertise: Resolving circuit and/or router issues, or assisting the circuit carrier in resolving them, from the company Network Operation Center (NOC), as well as resolving application problems or assisting application support in their resolution.
    July 1999 to August 2003: Senior NetOps Operator, EDS (Certegy Account), St. Pete, FL. Same expertise and ongoing projects as listed above for FNIS/Certegy. (Equifax outsourced the NetOps dept. to EDS in 1999.)
    January 1991 to July 1999: NetOps/Tandem Operator, Equifax, St. Pete & Tampa, FL. Same as all of the above for FNIS/Certegy/EDS except for circuit and router issues.

    EDUCATION
    - New Horizons Computer Learning Center, Jacksonville, Florida – CompTIA A+, Security+, and Network+ Certified. Currently working on CCNA Certification (07/30/10)
    - Mott Community College, Flint, Michigan – Associates Degree – Data Processing and General Education
    - Currently studying Japanese

    Read the article

  • Writing a script for ash?

    - by rumtscho
    My VPN is behaving funny sometimes, and I have to restart it often. I wanted to write a script which does that for me. It doesn't have to be anything fancy, just a shortcut for the commands I have to type into the terminal. More specifically: it will look at the running processes. If it finds a running vpnc process, it will kill it. Then it will start vpnc. I've written bash scripts of similar complexity, but now I don't have a bash, only an ash. Until now, the only difference I noticed is that there are far fewer commands available, but then, I don't use it very often. So I have some questions. Is writing ash scripts different from writing bash scripts? Is there something specific to consider when doing it? When the script is ready, how can I deploy it? For bash, I just put the executable file under /usr/lib and run it by typing the file name into the command line; will this work with ash? Are there any special pitfalls to watch out for in the script I want to write? I think that the process-killing part may get hairy, if I write something that kills the wrong process, but even then running the script shouldn't break anything permanently, right?
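
    The kill-and-restart logic itself is only a few lines whatever the shell. As a hedge against the bash/ash differences, here is the same flow sketched in Python (assuming Python and pidof are present on the box, and using "myvpn" as a placeholder for the vpnc configuration name); in plain ash the equivalent is essentially pidof vpnc, kill, then vpnc <config>.

        #!/usr/bin/env python
        # Sketch: kill any running vpnc and start it again.
        # "myvpn" is a placeholder configuration name; vpnc normally needs root.
        import os
        import signal
        import subprocess
        import time

        CONFIG = "myvpn"

        def running_pids(name):
            # pidof prints space-separated PIDs and exits non-zero when none are running
            try:
                out = subprocess.check_output(["pidof", name])
                return [int(p) for p in out.split()]
            except subprocess.CalledProcessError:
                return []

        for pid in running_pids("vpnc"):
            os.kill(pid, signal.SIGTERM)   # only matches the vpnc binary, so no stray kills
        time.sleep(1)

        subprocess.check_call(["vpnc", CONFIG])
        print("vpnc restarted")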

    Read the article

  • Windows Server (SBS) 2008 - Telephony service won't start (missing permissions)

    - by Uri
    I am running a SBS 2008 server. It's setup as the domain controller for the network. After a reboot, the Telephony service (and all services that depend on it) refuses to start under the Network Service account. The error given is: Error 1297: A privilege that the service requires to function properly does not exist in the service account configuration. You may use the Services Microsoft Management Console (MMC) snap-in (services.msc) and the Local Security Settings MMC snap-in (secpol.msc) to view the service configuration and the account configuration. This has caused all the network services not to be accessible e.g. terminal services, VPN (RRAS), SQL Server instances. The SSH daemon I have running on the box will accept connections only from localhost, but won't respond on the network. After searching around, the only advice I could find was to grant the Network Service account these permissions: Adjust memory quotas for a process Replace a process level token I set those permissions on both the Default Domain Policy and the Default Domain Controller Policy, but it seemingly had no effect. Most of the services will start if I change them to run under the Local System account, but that didn't make them accessible on the network. I even tried removing the Routing and Remote Access Services feature, rebooting and reinstalling it, but the issue remains. Any ideas?

    Read the article

  • Windows 7, network connection with no default gateway: any way to change the "Unknown network" statu

    - by e-t172
    Hi, I have a computer running Windows 7 Pro RTM. This computer has two network connections: A Wi-fi connection to the Internet (through a home router) which works just fine. An OpenVPN virtual network connection. More precisely, this is a virtual Ethernet connection which behaves exactly like a physical Ethernet wired connection. My problem is that the "Network and sharing center" shows "Unknown network" for the OpenVPN connection. After some research I found that logical networks (outside a domain) are identified by the MAC address of the default gateway of the connection. Problem is, the OpenVPN connection has no default gateway: it is a private network, so I don't need one... Consequently, the "Unknown network" is always considered public, so the firewall is always in "public mode", which I don't want. Plus, I can't rename "Unknown connection" or anything (which makes sense), so it is kinda ugly. My goal is to define a proper logical network for the OpenVPN connection with the private profile. I know of some workarounds (disable the firewall, modify security policy to make all unknown networks "private") but they're still workarounds. I just want my clients to connect to the VPN without having to disable their firewall settings, without changing global configuration with potential side-effects (the "security policy" solution) and without having to look at an ugly "Unknown connection" in the Network and sharing center. Is there any way I can do this? I tried to check what was going on in the registry (HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList is interesting), but I still didn't find a way to "force" the OpenVPN connection to be assigned to a logical network. Any help would be very appreciated. A related question showed up at Superuser: http://superuser.com/questions/37355/windows-7-cant-identify-network/37422

    Read the article

  • Using VLANs/subnetting to separate management from services?

    - by YouAreTheHat
    Background: I recently purchased a server and a managed switch for my home in the hopes of getting more experience and some fun toys to play with. The devices and appliances I either have or plan to have cover a broad spectrum: router, DD-WRT AP, Dell switch, OpenLDAP server, FreeRADIUS server, OpenVPN gateway, home PCs, gaming consoles, etc. I intend to segment my network with VLANs and associated subnets (e.g., VID10 is populated by devices on 192.168.10.0/24). The idea is to secure the more sensitive appliances by forcing traffic through my router/FW. Setup: After thinking and planning for some time, I have tentatively decided on 4 VLANs: one for the WAN connection, one for servers, one for home/personal devices, and one for management. In theory, the home VLAN will have limited access to the servers, and the management VLAN will be totally isolated for security. Question: Since I want to restrict access to management interfaces, but some appliances have to be accessible to other devices, is it possible/wise to have only management (SSH, HTTP, RDP) available on one VLAN/IP and only services (LDAP, DHCP, RADIUS, VPN) available on other? Is this a thing that is done? Does it gain me the security I think it does, or hurt me in some way?

    Read the article

  • Recommendations for secure business collaboration tools

    - by Michael Prescott
    I'm searching for a secure and easy way for business partners to collaboratively edit and exchange documents, share calendars, create schedules, and assign tasks. I speculate that the ideal collaboration environment or work-flow would actually involve several technologies and services. My co-workers and I have tried a variety of things from Google Apps to Wikis, but nothing feels very fluid or complete. I suppose defining what we need and our constraints is probably in order:

    - collaboratively edit basic text documents and spreadsheets
    - exchange documents like flow-charts, graphs, and files generated by our other desktop applications, but not source code
    - assign tasks to each other and ourselves and track the history of those tasks
    - easily see when relevant documents have been modified since last viewing, and the ability to easily push notifications to relevant workers (a clean front page that shows updates would probably suffice)
    - provide limited access to contract workers and guest users
    - if a remote user's system is compromised (keystroke logger or other spyware), we don't want the criminal to be able to gain access to all business documents (processes, trade-secrets, customer lists, etc.) simply because they gained access to a single Google account (or whatever web service)
    - cannot be a difficult-to-administer VPN infrastructure
    - cannot cost more than $100 per month (yeah, money is tight)
    - needs to support up to 25 users
    - we can host our own web applications, but it must be a low-maintenance solution

    Read the article
