Search Results

Search found 744 results on 30 pages for 'sarah sides'.

Page 15/30 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Network communications mechanisms for SQL Server

    - by Akshay Deep Lamba
    Problem
    I am trying to understand how SQL Server communicates on the network, because I'm having to tell my networking team what ports to open up on the firewall for an edge web server to communicate back to the SQL Server on the inside. What do I need to know?
    Solution
    In order to understand what needs to be opened where, let's first talk briefly about the two main protocols that are in common use today:
    - TCP - Transmission Control Protocol
    - UDP - User Datagram Protocol
    Both are part of the TCP/IP suite of protocols. We'll start with TCP.
    TCP
    TCP is the main protocol by which clients communicate with SQL Server. Actually, it is more correct to say that clients and SQL Server use Tabular Data Stream (TDS), but TDS sits on top of TCP, and when we're talking about Windows and firewalls and other networking devices, TCP is the protocol that rules and controls are built around. So we'll just speak in terms of TCP.
    TCP is a connection-oriented protocol, meaning the two systems negotiate the connection and both agree to it. Think of it like a phone call: while one person initiates the phone call, the other person has to agree to take it, and both people can end the phone call at any time. TCP is the same way. Both systems have to agree to the communications, but either side can end it at any time. In addition, there is functionality built into TCP to ensure that all communications can be disassembled and reassembled as necessary, so they can pass over various network devices and be put together again properly in the right order. It also has mechanisms to handle and retransmit lost communications. Because of this functionality, TCP is the protocol used by many different network applications. The way the applications all share the network is through the use of ports. When a service, like SQL Server, comes up on a system, it must listen on a port. For a default SQL Server instance, the default port is 1433. Clients connect to the port via the TCP protocol, the connection is negotiated and agreed to, and then the two sides can transfer information as needed until either side decides to end the communication. In actuality, both sides will have a port to use for the communications, but since the client's port is typically determined semi-randomly, when we're talking about firewalls and the like, we're typically interested in the port the server or service is using.
    UDP
    UDP, unlike TCP, is not connection-oriented. A "client" can send a UDP communication to anyone it wants. There's nothing in place to negotiate a connection, and nothing in the protocol itself to coordinate the order of communications or anything like that. If that's needed, it has to be handled by the application, or by a protocol built on top of UDP being used by the application. If you think of TCP as a phone call, think of UDP as a postcard. I can put a postcard in the mail to anyone I want, and so long as it is addressed properly and has a stamp on it, the postal service will pick it up. Now, what happens to it afterwards is not guaranteed. There's no mechanism for retransmission of lost communications. It's great for short communications that don't necessarily need an acknowledgement. Because multiple network applications could be communicating via UDP, it uses ports, just like TCP. The SQL Browser or the SQL Server Listener Service uses UDP.
    Network Communications - Talking to SQL Server
    When an instance of SQL Server is set up, the TCP port it listens on depends on the type of instance. A default instance will be set up to listen on port 1433. A named instance will be set to a random port chosen during installation. In addition, a named instance will be configured to allow it to change that port dynamically. What this means is that when a named instance starts up, if it finds something already using the port it normally uses, it'll pick a new port. If you have a named instance, and you have connections coming across a firewall, you're going to want to use SQL Server Configuration Manager to set a static port. This will allow the networking and security folks to configure their devices for maximum protection. While you can change the network port for a default instance of SQL Server, most people don't.
    Network Communications - Finding a SQL Server
    When just the name is specified for a client to connect to SQL Server, for instance, MySQLServer, this is an attempt to connect to the default instance. In this case the client will automatically attempt to communicate with port 1433 on MySQLServer. If you've switched the port for the default instance, you'll need to tell the client the proper port, usually by specifying the following syntax in the connection string: <server>,<port>. For instance, if you moved SQL Server to listen on 14330, you'd use MySQLServer,14330 instead of just MySQLServer.
    However, because a named instance sets up its port dynamically by default, the client never knows at the outset which port it should talk to. That's what the SQL Browser or the SQL Server Listener Service (SQL Server 2000) is for. In this case, the client sends a communication via the UDP protocol to port 1434. It asks, "Where is the named instance?" So if I was running a named instance called SQL2008R2, it would be asking the SQL Browser, "Hey, how do I talk to MySQLServer\SQL2008R2?" The SQL Browser would then send back a communication from UDP port 1434 to the client, telling the client how to talk to the named instance. Of course, you can skip all of this if you set that named instance's port statically. Then you can use the <server>,<port> mechanism to connect, and the client won't try to talk to the SQL Browser service. It'll simply try to make the connection. So, for instance, if the SQL2008R2 instance was listening on port 20080, specifying MySQLServer,20080 would attempt a connection to the named instance.
    Network Communications - Named Pipes
    Named Pipes is an older network library communications mechanism and it's generally not used any longer. It shouldn't be used across a firewall. However, if for some reason you need to connect to SQL Server with it, this protocol also sits on top of TCP. Named Pipes is actually used by the operating system and it has its own mechanism within the protocol to determine where to route communications. As far as network communications are concerned, it listens on TCP port 445. This is true whether we're talking about a default or named instance of SQL Server.
    The Summary Table
    To put all this together, here is what you need to know:
    Type of Communication                                 Protocol   Default Port
    Finding a SQL Server or SQL Server named instance     UDP        1434
    Communicating with a default instance of SQL Server   TCP        1433
    Communicating with a named instance of SQL Server     TCP        * (determined dynamically at start up)
    Communicating with SQL Server via Named Pipes         TCP        445
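    To make the client side concrete, here is a minimal sketch using the sqlcmd command-line utility (assuming it is installed on the client; the server names and ports are the examples from above, not real hosts):
    REM default instance: the name alone implies TCP port 1433
    sqlcmd -S MySQLServer -Q "SELECT @@SERVERNAME"
    REM default instance moved to a non-standard port (14330)
    sqlcmd -S MySQLServer,14330 -Q "SELECT @@SERVERNAME"
    REM named instance: the port is resolved by asking the SQL Browser on UDP 1434
    sqlcmd -S MySQLServer\SQL2008R2 -Q "SELECT @@SERVERNAME"
    REM named instance pinned to a static port (20080): the SQL Browser is bypassed
    sqlcmd -S MySQLServer,20080 -Q "SELECT @@SERVERNAME"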

    Read the article

  • Why Is Monitor Vertical Resolution so Often a Multiple of 360?

    - by Jason Fitzpatrick
    Stare at a list of monitor resolutions long enough and you might notice a pattern: many of the vertical resolutions, especially those of gaming or multimedia displays, are multiples of 360 (720, 1080, 1440, etc.). But why exactly is this the case? Is it arbitrary or is there something more at work? Today's Question & Answer session comes to us courtesy of SuperUser - a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.
    The Question
    SuperUser reader Trojandestroy recently noticed something about his display interface and needs answers:
    YouTube recently added 1440p functionality, and for the first time I realized that all (most?) vertical resolutions are multiples of 360. Is this just because the smallest common resolution is 480×360, and it's convenient to use multiples? (Not doubting that multiples are convenient.) And/or was that the first viewable/conveniently sized resolution, so hardware (TVs, monitors, etc.) grew with 360 in mind? Taking it further, why not have a square resolution? Or something else unusual? (Assuming it's usual enough that it's viewable). Is it merely a pleasing-the-eye situation? So why have the display be a multiple of 360?
    The Answer
    SuperUser contributor User26129 offers us not just an answer as to why the numerical pattern exists but a history of screen design in the process:
    Alright, there are a couple of questions and a lot of factors here. Resolutions are a really interesting field where psychooptics meets marketing.
    First of all, why are the vertical resolutions on YouTube multiples of 360? This is of course just arbitrary; there is no real reason this is the case. The reason is that resolution here is not the limiting factor for YouTube videos - bandwidth is. YouTube has to re-encode every video that is uploaded a couple of times, and tries to use as few re-encoding formats/bitrates/resolutions as possible to cover all the different use cases. For low-res mobile devices they have 360×240, for higher-res mobile there's 480p, and for the computer crowd there is 360p for 2xISDN/multiuser landlines, 720p for DSL and 1080p for higher-speed internet. For a while there were some codecs other than h.264, but these are slowly being phased out, with h.264 having essentially 'won' the format war and all computers being outfitted with hardware codecs for it.
    Now, there is some interesting psychooptics going on as well. As I said: resolution isn't everything. 720p with really strong compression can and will look worse than 240p at a very high bitrate. But on the other side of the spectrum: throwing more bits at a certain resolution doesn't magically make it better beyond some point. There is an optimum here, which of course depends on both resolution and codec. In general: the optimal bitrate is actually proportional to the resolution.
    So the next question is: what kind of resolution steps make sense? Apparently, people need about a 2x increase in resolution to really see (and prefer) a marked difference. Anything less than that and many people will simply not bother with the higher bitrates; they'd rather use their bandwidth for other stuff. This was researched quite a long time ago and is the big reason why we went from 720×576 (415 kpix) to 1280×720 (922 kpix), and then again from 1280×720 to 1920×1080 (2 MP). Stuff in between is not a viable optimization target. And again, 1440p is about 3.7 MP, another ~2x increase over HD. You will see a difference there. 4K is the next step after that.
    Next up is that magical number of 360 vertical pixels. Actually, the magic number is 120 or 128. All resolutions are some kind of multiple of 120 pixels nowadays; back in the day they used to be multiples of 128. This is something that just grew out of the LCD panel industry. LCD panels use what are called line drivers, little chips that sit on the sides of your LCD screen that control how bright each subpixel is. Because historically, for reasons I don't really know for sure (probably memory constraints), these multiple-of-128 or multiple-of-120 resolutions already existed, the industry-standard line drivers became drivers with 360 line outputs (1 per subpixel). If you were to tear down your 1920×1080 screen, I would be putting money on there being 16 line drivers on the top/bottom and 9 on one of the sides. Oh hey, that's 16:9. Guess how obvious that resolution choice was back when 16:9 was 'invented'.
    Then there's the issue of aspect ratio. This is really a completely different field of psychology, but it boils down to: historically, people have believed and measured that we have a sort of wide-screen view of the world. Naturally, people believed that the most natural representation of data on a screen would be in a wide-screen view, and this is where the great anamorphic revolution of the '60s came from, when films were shot in ever wider aspect ratios. Since then, this kind of knowledge has been refined and mostly debunked. Yes, we do have a wide-angle view, but the area where we can actually see sharply - the center of our vision - is fairly round. Slightly elliptical and squashed, but not really more than about 4:3 or 3:2. So for detailed viewing, for instance for reading text on a screen, you can utilize most of your detail vision by employing an almost-square screen, a bit like the screens up to the mid-2000s.
    However, again, this is not how marketing took it. Computers in ye olden days were used mostly for productivity and detailed work, but as they commoditized and as the computer as a media consumption device evolved, people didn't necessarily use their computer for work most of the time. They used it to watch media content: movies, television series and photos. And for that kind of viewing, you get the most 'immersion factor' if the screen fills as much of your vision (including your peripheral vision) as possible. Which means widescreen.
    But there's more marketing still. When detail work was still an important factor, people cared about resolution. As many pixels as possible on the screen. SGI was selling almost-4K CRTs! The most optimal way to get the maximum number of pixels out of a glass substrate is to cut it as square as possible. 1:1 or 4:3 screens have the most pixels per diagonal inch. But with displays becoming more consumery, inch-size became more important, not the number of pixels. And this is a completely different optimization target. To get the most diagonal inches out of a substrate, you want to make the screen as wide as possible. First we got 16:10, then 16:9, and there have been moderately successful panel manufacturers making 22:9 and 2:1 screens (like Philips). Even though pixel density and absolute resolution went down for a couple of years, inch-sizes went up and that's what sold. Why buy a 19″ 1280×1024 when you can buy a 21″ 1366×768? Eh...
    I think that about covers all the major aspects here.
    There's more, of course; bandwidth limits of HDMI, DVI, DP and of course VGA played a role, and if you go back to the pre-2000s, graphics memory, in-computer bandwidth and simply the limits of commercially available RAMDACs played an important role. But for today's considerations, this is about all you need to know. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Let your Signature Experience drive IT-decision making

    - by Tania Le Voi
    Today's CIO job description: "Align IT infrastructure and solutions with business goals and objectives; AND while doing so reduce costs; BUT ALSO, be innovative, ensure the architectures are adaptable and agile as we need to act today on the changes that we may request tomorrow."
    Sound like an unachievable request? The fact is, reality dictates that CIOs are put under this type of pressure to deliver more with less. In a past career phase I spent a few years as an IT Relationship Manager for a large insurance company. This is a role that we see all too infrequently in many of our customers, and it's a shame. The purpose of this role was to build a bridge, a relationship between IT and the business. Key to achieving that goal was to ensure the same language was being spoken and, more importantly, that objectives were commonly understood - hence services and projects were delivered on time, on budget and actually solved the business problems.
    In reality IT and the business are already married, but the relationship is most often defined as 'supplier' of IT rather than a 'trusted partner'. To deliver business value they need to understand how to work together effectively to attain this next level of partnership. The business cannot compete if they do not get a new product to market ahead of the competition, or, for example, act in a timely manner to address a new industry problem such as a legislative change. An even better example is when the application or service fails and the business takes a hit from bad publicity, being a trending topic on social media and losing direct revenue from online channels. For this reason alone business and IT need the alignment of their priorities and deliverables now more than ever! Take a look at Forrester's recent study that found 'many IT respondents considering themselves to be trusted partners of the business but their efforts are impaired by the inadequacy of tools and organizations'.
    IT Meet the Business; Business Meet IT
    So what is going on? We talk about aligning the business with IT but the reality is it's difficult to do. Like any relationship, each side has different goals and needs, and language can be a barrier; business vs. technology jargon! What if we could translate the needs of both sides into actionable information, backed by data both sides understand, presented in a meaningful way? Well now we can, with the Business-Driven Application Management capabilities in Oracle Enterprise Manager 12cR2!
    Enterprise Manager's Business-Driven Application Management capabilities provide the information that IT needs to understand the impact of its decisions on business criteria. No longer does IT need to be focused solely on speeds and feeds, performance and throughput - now IT can understand IT's impact on business KPIs like inventory turns, order-to-cash cycle, pipeline-to-forecast, and similar. Similarly, now the line of business can understand which IT services are most critical for the KPIs they care about. There is a good deal of resources on Oracle Technology Network that describe the functionality of these products, so I won't rehash them here. What I want to talk about is what you do with these products. What's next after we meet? Where do you start?
    Step 1: Identify the Signature Experience.
    This is THE business process (or set of processes) that is core to the business: the one that drives the economic engine, the process that a customer recognises the company brand for, that carries its reputation and the customer experience, the process that a CEO would state as his number one priority. The crème de la crème of your business! Once you have nailed this, the rest is easy, as Enterprise Manager 12c makes it easy.
    Step 2: Map the Signature Experience to underlying IT.
    Taking the signature experience, map out the touch points of the components that play a part in ensuring this business transaction is successful end to end; think of it like mapping out a critical path: the applications, middleware, databases and hardware. Use the wealth of Enterprise Manager features such as Systems, Services, Business Application Targets and Business Transaction Management (BTM) to assist you. Adding Real User Experience Insight (RUEI) into the mix will make the end-to-end customer satisfaction story transparent. Work with the business and define meaningful key performance indicators (KPIs) and thresholds to enable you to report and act upon them.
    Step 3: Observe the data over time.
    You now have meaningful insight into every step enabling your signature experience and you understand the implication of that experience on your underlying IT. Watch it for a few months, see what happens, and then reconvene with your business stakeholders and set clear and measurable targets which can re-define service levels.
    Step 4: Change the information about which you and the business communicate.
    It's amazing what happens when you and the business speak the same language. You'll be able to make more informed business and IT decisions. From here IT can identify where/how budget is spent, whether on the level of support, performance, capacity, HA, DR, certification etc. IT SLAs no longer need to be focused on metrics such as % availability but can be structured around business process requirements.
    The power of this way of thinking doesn't end here. IT staff get to see and understand how their own role contributes to the business, making them accountable for the business service. Take a step further and appraise your staff on the business competencies that are linked to the service availability. For the business, the language barrier is removed by producing targeted reports on the signature experience core to the business and therefore key to the CEO. Chargeback or showback becomes easier to justify as the 'cost per day of outage' can be more easily calculated; the business will be able to translate the cost to the business into the cost/value of the underlying IT that supports it.
    Used this way, Oracle Enterprise Manager 12c is a key enabler of a harmonious relationship between the end customer, the business and IT to deliver ultimate service and satisfaction. Just engage with the business upfront, make the signature experience visible and let Enterprise Manager 12c do the rest. In the next blog entry we will cover some of the Enterprise Manager features mentioned, to enable you to implement this new way of working.

    Read the article

  • What was missing from the Content Strategy Forum?

    - by Roger Hart
    In April, Paris hosted the first ever Content Strategy Forum. The event's website proudly proclaims: 170 attendees, 18 nationalities, 17 speakers, 1 volcano... Content Strategy Forum 2010 rocked the world! The volcano was in Iceland, and the closest we came to rocking the world was a cursory mention in the Huffington Post, but I'll grant the event was awesome. One thing missing from that list, however, is "94 companies" (plus a couple of universities and freelancers, and what have you). A glance through the attendees directory reveals a fairly wide organisational turnout - 24 students from two Parisian universities, countless design and marketing agencies, a series of tech firms, small and large. Two delegates from IBM, two from ARM, an appearance from RIM, Skype, and Facebook; twelve from the various bits of eBay. Oh, and, err, nobody from Google, Microsoft, Yahoo, Amazon, Play, Twitter, LinkedIn, Craigslist, the BBC, no banks I noticed, and I didn't spot a newspaper. You get the idea. Facebook notwithstanding, you have to scroll through a few pages of Alexa rankings to find company names from the attendee list.
    I find this interesting, and I'm not wholly sure what to make of it. Of the large, web-centric, content-rich organizations conspicuously absent, at least one of two things is true:
    - They didn't know about the event
    - They didn't care about the event
    Maybe these guys all have content strategy completely sorted, and it's an utterly naturalised part of their business process. Maybe nobody at, say, Apple or Play.com ever publishes a single piece of content that isn't neatly tailored to their (clearly defined, of course) user and business goals. Wouldn't that be lovely? The thing is, in that rosy and beatific world, there's still a case for those folks to join the community. There are bound to be other perspectives, and things to learn.
    You see, the other thing achingly conspicuous by its absence was case studies. In her keynote address, Kristina Halvorson made the point that what content strategy really needs is some big, loud success stories. A point I'd firmly second as a content strategist working within an organisation. Sarah Cancilla's presentation on content strategy at Facebook included some very neat, specific examples, and was richer for it. It didn't hurt that the example was Facebook - you're getting impressively big numbers off base. What about the other big boys? Is there anybody out there with a perspective? Do we all just look very silly to you, fretting away over text and images and users and purposes? Is content validation and maintenance so accustomed a part of your business that calling attention to it is like sniffing the air and saying "Hmm, a lot of nitrogen about today"? And if it is, do you have any wisdom to share?

    Read the article

  • How Big Data and Social Won the Election

    - by Mike Stiles
    The story of big data's influence on the outcome of the US Presidential election is worth a good look, because a) it's a harbinger of things to come, and b) it's an example of similar successes available to any enterprise seriously resourcing integrated big data, modeling, and data-driven execution on all assets, including social.
    Obama campaign manager Jim Messina fielded a data and analytics brain trust 5 times larger than in 2008. At that time, there were numerous databases from various sources, few of them talking to each other. This time, the mission was to be metrics-centered and measure everything measurable, in context with all the other data. Big data showed them exactly what they needed to know and told them what to do about it. It showed them women 40-49 on the west coast would donate big money if they got to eat with George Clooney. Women on the east coast would pony up to hang out with Sarah Jessica Parker. Extensive daily modeling showed them what kinds of email appeals, from whom, and to whom, would prove most successful in raising cash, recruiting volunteers, and getting out the vote. Swing state voters were profiled and approached with more customized targeting than at any time in history. Ads were purchased on specific shows watched by the targets, increasing efficiency 14% over traditional media buys. For all the criticism of the candidate's focus on appearing on comedy and entertainment shows, and local radio morning shows, that's where the data sent them to reach the voters most likely to turn out for them.
    And then there was social. Again, more than in any other election, Facebook was used for virtual, highly efficient door-to-door canvassing. Facebook fans got pictures of friends in swing states and were asked to encourage them to act. Using that approach, 1 in 5 peer-to-peer appeals led to the desired action. Assumptions, gut, intuition, campaign experience - all took a backseat to strategy shifts solidly backed up by data. Zeroing in on demographics likely to back the President and tracking their mood daily literally changed the voter landscape. The Romney team watched Obama voters appear seemingly out of thin air. One Obama campaign aide said, "We ran the election 66,000 times every night."
    Which brings us to your organization. If you're starting to feel like the battle-cry of "but this is the way we've always done it" is starting to put you in an extremely vulnerable position, you're right. Social has become a key communication tool of the 21st century. Failing to use it, or failing to invest in a deep understanding of who your customers and prospects are so the content you post there will achieve desired actions and results, will leave you waking up one morning wondering, "What happened?"
    @mikestiles
    Photo: stock.xchng

    Read the article

  • Drawing a rectangular prism using opengl

    - by BadSniper
    I'm trying to learn OpenGL. I wrote some code to build a rectangular prism. I don't want to draw back faces, so I used glCullFace(GL_BACK) and glEnable(GL_CULL_FACE). But I keep getting back faces when viewing from the front, and sometimes when rotating, sides vanish. Can someone point me in the right direction?
    glPolygonMode(GL_FRONT, GL_LINE); // draw wireframe polygons
    glColor3f(0, 1, 0);               // set color green
    glCullFace(GL_BACK);              // don't draw back faces
    glEnable(GL_CULL_FACE);           // don't draw back faces
    glTranslatef(-10, 1, 0);          // position
    glBegin(GL_QUADS);
    // face 1
    glVertex3f(0, -1, 0); glVertex3f(0, -1, 2); glVertex3f(2, -1, 2); glVertex3f(2, -1, 0);
    // face 2
    glVertex3f(2, -1, 2); glVertex3f(2, -1, 0); glVertex3f(2, 5, 0); glVertex3f(2, 5, 2);
    // face 3
    glVertex3f(0, 5, 0); glVertex3f(0, 5, 2); glVertex3f(2, 5, 2); glVertex3f(2, 5, 0);
    // face 4
    glVertex3f(0, -1, 2); glVertex3f(2, -1, 2); glVertex3f(2, 5, 2); glVertex3f(0, 5, 2);
    // face 5
    glVertex3f(0, -1, 2); glVertex3f(0, -1, 0); glVertex3f(0, 5, 0); glVertex3f(0, 5, 2);
    // face 6
    glVertex3f(0, -1, 0); glVertex3f(2, -1, 0); glVertex3f(2, 5, 0); glVertex3f(0, 5, 0);
    glEnd();

    Read the article

  • SQLAuthority News – Author Visit to Nepal TechMela – 2 Technical Sessions

    - by pinaldave
    Microsoft MDP Nepal is going to organize a Tech Mela for the IT community of Nepal on March 29 & 30, 2010 (2066 Chaitra 16 & 17), Monday and Tuesday, at the Russian Center for Science & Culture, Kamalpokhari, Kathmandu. The objective of the event is to enhance and exchange knowledge about Information Technology, as well as Microsoft products and technologies, with the IT community. I am very excited to attend this one-of-a-kind event in Nepal. I will be giving two presentations at the said event:
    1) Become An Efficient Developer - Learn The Tricks of SQL Server Management Studio (SSMS)
    There are so many features in SQL Server Management Studio that we may not know or use all of them. This presentation will impart tricks and tips of SQL Server Management Studio to the event goers. It aims to make you an efficient developer by giving you an edge over other developers.
    2) Good, Bad and Ugly: The Story of Index
    An index is often considered the sure-shot tool for improving the performance of any query. Learn the basics with examples and discover the good, bad and ugly sides of indexes. This session will help you write queries efficiently in the future.
    I am very excited to attend this special event as this is the very first time I will be presenting technical sessions in Nepal. If you are in Nepal, I strongly suggest that you go to this once-in-a-lifetime IT fair.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • Connecting Linux to WatchGuard Firebox SSL (OpenVPN client)

    Recently, I got a new project assignment that requires a permanent connection to the customer's network through VPN. They are using a so-called SSL VPN. As I have been using OpenVPN for more than 5 years within my company's network, I was quite curious about their solution and how it would actually be different from OpenVPN. Well, short version: it is a disguised version of OpenVPN. Unfortunately, the company only offers a client for Windows and Mac OS, which shouldn't bother any Linux user after all. OpenVPN is part of every recent distribution and can be activated in a couple of minutes - both client as well as server (if necessary).
    WatchGuard Firebox SSL - About dialog
    Borrowing some files from a Windows client installation
    Initially, I didn't know about the product, so therefore I went through the installation on Windows 8. No obstacles (and no restart despite installation of TAP device drivers!) here, and the secured VPN channel was up and running in less than 2 minutes or so. Much appreciated by both parties - customer and me. Of course, this whole client package and my long-year approved and stable installation ignited my interest to have a closer look at the WatchGuard client. Compared to the original OpenVPN client (okay, I have to admit this was years ago) this commercial product is smarter in terms of file locations during installation. You'll be able to access the configuration and key files below your roaming application data folder. To get there, simply enter '%AppData%\WatchGuard\Mobile VPN' in your Windows/File Explorer and confirm with Enter/Return. This will display the following files:
    Application folder below user profile with configuration and certificate files
    From there we are going to borrow four files, namely:
    ca.crt
    client.crt
    client.ovpn
    client.pem
    and transfer them to the Linux system. You might also be able to isolate those four files from a Mac OS client. Frankly, I'm just too lazy to run the WatchGuard client installation on a Mac mini only to find the folder location, and I'm going to describe why a little bit further down this article. I know that you can do that! Feedback in the comment section is appreciated.
    Configuration of OpenVPN (console)
    Depending on your distribution the following steps might be a little different, but in general you should be able to get the important information from them. I'm going to describe the steps in Ubuntu 13.04 (Raring Ringtail). As usual, there are two possibilities to achieve your goal: console and UI. Let's see what needs to be done. First of all, you should ensure that you have OpenVPN installed on your system. Open your favourite terminal application and run the following statement:
    $ sudo apt-get install openvpn network-manager-openvpn network-manager-openvpn-gnome
    Just to be on the safe side. The four files mentioned above from your Windows machine could be copied anywhere, but either you place them below your own user directory or you put them (as root) below the default directory:
    /etc/openvpn
    At this stage you would be able to do a test run already. Just in case, run the following command and check the output (it's the same kind of information you would get from the 'View Logs...' context menu entry in Windows):
    $ sudo openvpn --config client.ovpn
    Pay attention to the correct path to your configuration and certificate files. OpenVPN will ask you to enter your Auth Username and Auth Password in order to establish the VPN connection, same as the Windows client.
    Remote server and user authentication to establish the VPN
    Please complete the test run and see whether all went well. You can disconnect by pressing Ctrl+C.
    Simplifying your life - authentication file
    In my case, I actually set up the OpenVPN client on my gateway/router. This establishes a VPN channel between my network and my client's network and allows me to switch machines easily without having the necessity to install the WatchGuard client on each and every machine. That's also very handy for my various virtualised Windows machines. Anyway, as the client configuration, key and certificate files are located on a headless system somewhere under the roof, it is mandatory to have an automatic connection to the remote site. For that you should first change the file extension '.ovpn' to '.conf', which is the default extension on Linux systems for OpenVPN, and then open the client configuration file in order to extend an existing line.
    $ sudo mv client.ovpn client.conf
    $ sudo nano client.conf
    You should have content similar to this:
    dev tun
    client
    proto tcp-client
    ca ca.crt
    cert client.crt
    key client.pem
    tls-remote "/O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server"
    remote-cert-eku "TLS Web Server Authentication"
    remote 1.2.3.4 443
    persist-key
    persist-tun
    verb 3
    mute 20
    keepalive 10 60
    cipher AES-256-CBC
    auth SHA1
    float 1
    reneg-sec 3660
    nobind
    mute-replay-warnings
    auth-user-pass auth.txt
    Note: I changed the IP address of the remote directive above (which should be obvious, right?). Anyway, the required change is the 'auth-user-pass auth.txt' line at the end (originally marked in red), and we have to create a new authentication file 'auth.txt'. You can give the directive 'auth-user-pass' any file name you'd like to. Due to my existing OpenVPN infrastructure my setup differs completely from the above written content, but for the sake of simplicity I just keep it 'as-is'. Okay, let's create this file 'auth.txt':
    $ sudo nano auth.txt
    and just put two lines of information in it - username on the first, and password on the second line, like so:
    myvpnusername
    verysecretpassword
    Store the file, change permissions, and call openvpn with your configuration file again:
    $ sudo chmod 0600 auth.txt
    $ sudo openvpn --config client.conf
    This should now work without being prompted to enter username and password. In case that you placed your files below the system-wide location /etc/openvpn you can operate your VPNs also via the service command, like so:
    $ sudo service openvpn start client
    $ sudo service openvpn stop client
    Using Network Manager
    For newer Linux users or the ones with 'console-phobia' I'm going to describe now how to use Network Manager to set up the OpenVPN client. For this, move your mouse to the systray area and click on Network Connections => VPN Connections => Configure VPNs..., which opens your Network Connections dialog. Alternatively, use the HUD and enter 'Network Connections'.
    Network connections overview in Ubuntu
    Click on the 'Add' button. On the next dialog select 'Import a saved VPN configuration...' from the dropdown list and click on 'Create...'
    Choose connection type to import VPN configuration
    Now you navigate to the folder where you put the client files from the Windows system and you open the 'client.ovpn' file.
    Next, on the 'VPN' tab proceed with the following steps (the corresponding directives from the configuration file are given in parentheses):
    General:
    - Check the IP address of Gateway ('remote' - we used 1.2.3.4 in this setup)
    Authentication:
    - Change Type to 'Password with Certificates (TLS)' ('auth-user-pass')
    - Enter the User name to access your client keys (Auth Name: myvpnusername)
    - Enter the Password (Auth Password: verysecretpassword) and choose your password handling
    - Browse for your User Certificate ('cert' - should be pre-selected with client.crt)
    - Browse for your CA Certificate ('ca' - should be filled as ca.crt)
    - Specify your Private Key ('key' - here: client.pem)
    Then click on the 'Advanced...' button and check the following values:
    - Use custom gateway port: 443 (second value of the 'remote' directive)
    - Check the selected value of Cipher ('cipher')
    - Check HMAC Authentication ('auth')
    - Enter the Subject Match: /O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server ('tls-remote')
    Finally, you have to confirm and close all dialogs. You should be able to establish your OpenVPN-WatchGuard connection via Network Manager. For that, click on the 'VPN Connections => client' entry on your Network Manager in the systray. It is advised that you keep an eye on the syslog to see whether there are any problematic issues that would require some additional attention.
    Advanced topic: routing
    As stated above, I'm running the 'WatchGuard client for Linux' on my headless server, and since then I'm actually establishing a secure communication channel between two networks. In order to enable your network clients to get access to machines on the remote side, there are two possibilities:
    - Proper routing on both sides of the connection, which enables both-direction access, or
    - Network masquerading on the 'client side' of the connection
    Following, I'm going to describe the second option a little bit more in detail. The Linux system that I'm using is already configured as a gateway to the internet. I won't explain the necessary steps to do that, and will only focus on the additional tweaks I had to do. You can find tons of very good instructions and tutorials on 'How to set up a Linux gateway/router' - just use Google.
    OK, back to the actual modifications. First, we need to have some information about the network topology and IP address range used on the 'other' side. We can get this very easily from /var/log/syslog after we established the OpenVPN channel, like so:
    $ sudo tail -n20 /var/log/syslog
    Or if your system is quite busy with logging, like so:
    $ sudo less /var/log/syslog | grep ovpn
    The output should contain a PUSH_REPLY message similar to the following one:
    Jul 23 23:13:28 ios1 ovpn-client[789]: PUSH: Received control message: 'PUSH_REPLY,topology subnet,route 192.168.1.0 255.255.255.0,dhcp-option DOMAIN ,route-gateway 192.168.6.1,topology subnet,ping 10,ping-restart 60,ifconfig 192.168.6.2 255.255.255.0'
    The interesting part for us is the route entry, which I highlighted already in the sample PUSH_REPLY. Depending on your remote server there might be multiple networks defined (172.16.x.x and/or 10.x.x.x). Important: The IP address ranges on both sides of the connection have to be different, otherwise you will have to shuffle IPs or increase the netmask.
    After the VPN connection is established, we have to extend the rules for iptables in order to route and masquerade IP packets properly.
    I created a shell script to take care of those steps:
    #!/bin/sh -e
    IPTABLES=/sbin/iptables
    DEV_LAN=eth0
    DEV_VPNS=tun+
    VPN=192.168.1.0/24
    $IPTABLES -A FORWARD -i $DEV_LAN -o $DEV_VPNS -d $VPN -j ACCEPT
    $IPTABLES -A FORWARD -i $DEV_VPNS -o $DEV_LAN -s $VPN -j ACCEPT
    $IPTABLES -t nat -A POSTROUTING -o $DEV_VPNS -d $VPN -j MASQUERADE
    I'm using the wildcard interface 'tun+' because I have multiple client configurations for OpenVPN on my server. In your case, it might be sufficient to specify device 'tun0' only.
    Simplifying your life - automatic connect on boot
    Now that the client connection works flawlessly and the configuration of routing and iptables is okay, we might consider adding another 'laziness' factor into our setup. Due to kernel updates or other circumstances it might be necessary to reboot your system. Wouldn't it be nice if the VPN connections were established during the boot procedure? Yes, of course it would be. To achieve this, we have to configure OpenVPN to automatically start our VPNs via init script. Let's have a look at the responsible 'default' file and adjust the settings accordingly.
    $ sudo nano /etc/default/openvpn
    Which should have content similar to this:
    # This is the configuration file for /etc/init.d/openvpn
    #
    # Start only these VPNs automatically via init script.
    # Allowed values are "all", "none" or space separated list of
    # names of the VPNs. If empty, "all" is assumed.
    # The VPN name refers to the VPN configuration file name.
    # i.e. "home" would be /etc/openvpn/home.conf
    #
    #AUTOSTART="all"
    #AUTOSTART="none"
    #AUTOSTART="home office"
    #
    # ... more information which remains unmodified ...
    With the OpenVPN client configuration as described above you would either set AUTOSTART to "all" or to "client" to enable automatic start of your VPN(s) during boot. You should also take care that your iptables commands are executed after the link has been established, too. You can easily test this configuration without a reboot, like so:
    $ sudo service openvpn restart
    Enjoy stable VPN connections between your Linux system(s) and a WatchGuard Firebox SSL remote server.
    Cheers, JoKi

    Read the article

  • Packet loss rate with iperf and tcpdump

    - by stefita
    I tested a line for its link quality with iperf. The measured speed (UDP, port 9005) was 96 Mbps, which is fine, because both servers are connected to the internet at 100 Mbps. On the other hand, the datagram loss rate was shown to be 3.3-3.7%, which I found a little too high. Using a high-speed transfer protocol I recorded the packets on both sides with tcpdump. Then I calculated the packet loss: 0.25% on average. Does anyone have an explanation for where this big difference may be coming from? What is an acceptable packet loss in your opinion?
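    For reference, a minimal sketch of the two measurements side by side (assuming iperf2 and root privileges for tcpdump; host name, interface and duration are placeholders):
    # on the receiving server: start the UDP sink and capture the test traffic
    iperf -s -u -p 9005 &
    tcpdump -i eth0 -w received.pcap udp port 9005 &
    # on the sending server: capture outgoing traffic, then run the UDP test
    tcpdump -i eth0 -w sent.pcap udp port 9005 &
    iperf -c receiver.example.com -u -p 9005 -b 96M -t 30
    # afterwards, compare the raw datagram counts of both captures
    tcpdump -r sent.pcap | wc -l
    tcpdump -r received.pcap | wc -l
    If the capture counts diverge far less than iperf's reported loss, the drops may be happening in the receiving host's UDP socket buffers rather than on the line itself.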

    Read the article

  • Best way to automatically synchronize files between Linux and Windows

    - by Gregory
    My first choice was rsync, but it caused some issues and is too manual. My second choice, currently under evaluation, is Unison. Are there any other good options for bi-directional auto-syncing? The syncing tool cannot add its own files to the directories to be synced, which removes CVS/SVN as a choice; plus they are too manual. The requirements are: a user-level program on both sides, no root account access available; only scanning on Linux - on Windows it could be a virtual drive/path; very fast and efficient, like rsync. Some other requirements: the machines are not on the same network, and the files cannot fall into the wrong hands, nor can they be handled by 3rd parties, which pretty much excludes all online storage sites.
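    For what it's worth, Unison does tick most of those boxes; a minimal sketch of an automated run might look like this (paths, user and host are placeholders; assumes the same Unison version on both machines and SSH access from the Linux side):
    # bi-directional sync of a local directory with a remote one over SSH,
    # non-interactively (-batch) and accepting non-conflicting changes (-auto)
    unison /home/user/work ssh://user@remotehost//path/to/work -batch -auto
    Scheduled from a user crontab, this needs no root access on either side; conflicts Unison cannot resolve are skipped and reported for manual handling.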

    Read the article

  • IIS7 failover cluster across datacenters

    - by Scott
    Hello, I have servers in two different datacenters, with each datacenter getting static IPs. What I would like to do is set up the servers as IIS7 servers, allowing them to fail over from datacenter to datacenter with little (or preferably no) interruption. Servers on both sides are running Windows Server 2008 x64 with IIS7 (or 7.5 if needed). I am interested in how to point DNS traffic to the new datacenter without manual human intervention. For example:
    Datacenter A: IP: 192.168.1.115, Servers: Server 2008 x64 w/ IIS 7
    Datacenter B: IP: 192.168.1.220, Servers: Server 2008 x64 w/ IIS 7
    Other information: Domain Name: Example.org, Domain DNS: 192.168.1.115
    If Datacenter A's connectivity went down (broken service line, etc.), how does the traffic know to route to Datacenter B on 192.168.1.220? Thanks, Scott

    Read the article

  • Free Windows Azure event next Monday in London (29th March)

    - by Eric Nelson
    I just heard that we still have spaces for this event happening next week (29th March 2010). Whilst the event is designed for start-ups, I'm sure nobody would notice if you snuck in :-) Just keep it to yourself ;-) Register using invitation code: 79F2AB. Hope to see you there. The agenda is looking pretty swish:
    09:00 - 09:30 Registration
    09:30 - 10:15 Keynote: 'I've looked at clouds from both sides now....' - John Taysom, Active Seed Investor
    10:15 - 10:45 The Microsoft Vision for Cloud Computing - Steve Clayton, Director Software + Services, EMEA
    10:45 - 11:00 Break
    11:00 - 12:30 "Windows Azure in Real World" - hear from startups that have built their business around the Azure platform, moderated by Alistair Beagley, Azure UK Developer and Platform Lead
    12:30 - 13:15 Lunch and networking
    13:15 - 14:15 Breakout Tracks, moderated by our Azure Experts
    1. Windows Azure Technical Overview - David Gristwood, Application Architect, Microsoft
    2. SQL Azure Technical Overview - Eric Nelson, Application Architect, Microsoft
    3. Commercial insight into Windows Azure and what this means for BizSpark Start-ups - Simon Karn, Commercial Lead, UK Windows Azure Incubation Team, Microsoft
    14:15 - 14:30 Session change over
    14:30 - 15:30 Breakout Tracks, moderated by our Azure Experts
    1. SQL Azure Technical Overview (repeat) - Eric Nelson, Application Architect, Microsoft
    2. Deep dive into Windows Azure - Neil Kidd, Architect, Microsoft Technology Centre
    3. Lessons Learnt - Windows Azure in the Real World interactive session - Two customers hosted by Matt Deacon, Enterprise Architect, Microsoft
    15:30 - 16:00 Break & Session change over
    16:00 - 17:00 Breakout Tracks, moderated by our Azure Experts
    1. PHP / Ruby on Azure - Simon Davies, Architect, UK Windows Azure Incubation Team, Microsoft
    2. Commercial insight into Windows Azure and what this means for BizSpark Start-ups (repeat) - Simon Karn, Commercial Lead, UK Windows Azure Incubation Team, Microsoft
    3. Lessons Learnt - Windows Azure in the Real World interactive session #2 - Two customers hosted by Matt Deacon, Enterprise Architect, Microsoft
    17:00 - 18:00 Pitches and Judging
    18:15 Wrap-up and close
    18:15 - 20:00 Drinks & Networking

    Read the article

  • Pivotal Announces JSR-352 Compliance for Spring Batch

    - by reza_rahman
    Pivotal, the company currently funding development of the popular Spring Framework, recently announced JSR 352 (aka Batch Applications for the Java Platform) compliance for the Spring Batch project. More specifically, Spring Batch targets JSR-352 Java SE runtime compatibility rather than Java EE runtime compatibility. If you are surprised that APIs included in Java EE can pass TCKs targeted for Java SE, you should not be. Many other Java EE APIs target compatibility in Java SE environments such as JMS and JPA. You can read about Spring Batch's support for JSR-352 here as well as the Spring configuration to get JSR-352 working in Spring (typically a very low level implementation concern intended to be completely transparent to most JSR-352 users). JSR 352 is one of the few very encouraging cases of major active contribution to the Java EE standard from the Spring development team (the other major effort being Rod Johnson's co-leadership of JSR 330 along with Bob Lee). While IBM's Christopher Vignola led the spec and contributed IBM's years of highly mission critical batch processing experience from products like WebSphere Compute Grid and z/OS batch, the Spring team provided major influences to the API in particular for the chunk processing, listeners, splits and operational interfaces. The GlassFish team's own Mahesh Kannan also contributed, in particular by implementing much of the Java EE integration work for the reference implementation. This was an excellent example of multilateral engineering collaboration through the standards process. For many complex reasons it is not too hard to find evidence of less than amicable interaction between the Spring ecosystem and the Java EE standard over the years if one cares to dig deep enough. In reality most developers see Spring and Java EE as two sides of the same server-side Java coin. At the core Spring and Java EE ecosystems have always shared deep undercurrents of common user bases, bi-directional flows of ideas and perhaps genuine if not begrudging mutual respect. We can all hope for continued strength for both ecosystems and graceful high notes of collaboration via efforts like JSR 352.

    Read the article

  • Dual monitors won't accept new settings

    - by mschulze
    I'm trying to use dual monitors, but every time I try to change the display settings (external monitor on the right of the laptop instead of the left, etc.) my screens go black; then my laptop screen comes back "normally" but my external monitor just flashes different shades of red or black. I cannot interact with anything on my laptop screen even though I can move my mouse around. Even when the monitors attempt to revert, the external monitor continues to flicker instead of reverting. The only way to stop it is to hard-boot, but then the new settings aren't saved. Also, when the dual monitors are both working, my laptop screen has two vertical black bars along its sides. It's like Natty just decided I couldn't use the outer inch and a half of my laptop screen. My external monitor doesn't have this problem, and my laptop uses its full screen size when the external monitor is not hooked up. Does anyone know what I'm doing wrong? I have an HP Pavillion dv7 and an HP w2338h external monitor. Thanks!
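    As a possible workaround, applying the layout from a terminal with xrandr sometimes succeeds where the Displays dialog fails. A minimal sketch (the output names LVDS1 and VGA1 are placeholders; check the real names first):
    # list connected outputs and the modes they support
    xrandr -q
    # put the external monitor at its preferred mode, to the right of the laptop panel
    xrandr --output VGA1 --auto --right-of LVDS1
    If the laptop panel still shows black bars, re-applying its preferred mode explicitly (xrandr --output LVDS1 --auto) is worth a try.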

    Read the article

  • problem with monitor

    - by misha nesterenko
    I have an Asus VB191T connected to a notebook via DVI connector as a secondary monitor. I am observing some problems on that monitor.
    Photo of a properly functioning screen:
    Photo of my Asus VB191T screen:
    So as you can see, there are white lines on both sides of black ones. The resolution is set to the monitor's native resolution, which is 1280x1024. The artifacts don't appear for every color, and they show up the clearest where there is black on a grey background. What could be wrong? The monitor itself? Perhaps the connector?

    Read the article

  • FTP upload stalls at same point every time on FileZilla

    - by John
    On two different FTP accounts, I am having problems uploading files. I can log in and see the contents of the dir, and start an upload. Using FileZilla, the transfer seems to always stall at either 0.9% or 1.2% (always those two numbers) and may simply hang, or keep restarting and then again stop at the same point. Windows XP's FTP is not great, but I get similar types of problems there... it starts uploading and after a short while I get a timeout error. FTP used to work fine, and I don't know if it's these accounts in particular (both have the same service provider although purchased on opposite sides of the world) or if "FTP is broken on my PC"... can that even happen?!
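    One way to narrow this down is to repeat the upload with a third client from the command line, for example lftp (host, credentials and file name are placeholders):
    # upload a single file non-interactively; prepend "set ftp:passive-mode off;" to test active mode as well
    lftp -u myuser,mypassword ftp.example.com -e "put testfile.zip; bye"
    If this stalls at the same point, the problem is unlikely to be client-specific; a stall at a consistent percentage often points at an MTU or firewall issue between the two ends rather than at the FTP software itself.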

    Read the article

  • Linux Transparent Bridge for Network

    - by Blackninja543
    I am attempting to set up a semi-transparent bridge. I say semi because I want it to act as a transparent tap for all traffic moving through both sides of the bridge, while also having the "green zone" able to access a web interface on the bridge that will display all results of the IDS and other network monitoring tools. My example would be as such: eth0 <--> bridge(br0) <--> eth1. The entire network would be on the same subnet; however, anything coming from eth0 to eth1 would be accepted. The only time anything would be dropped is if eth0 attempted to access br0. If someone attempts to access the web interface on br0 through eth1, it will succeed. My biggest problem, I feel, is that if I attempt to block anything from eth0 to br0, this will drop the bridge altogether.
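    For the bridge itself, a minimal sketch along these lines should work on most distributions (interface names as in the question; the management address 192.168.0.2/24 is a placeholder; assumes bridge-utils and ebtables are installed, and the ebtables rule is one possible way to express "eth0 must not reach br0"):
    # create the bridge and attach both interfaces
    brctl addbr br0
    brctl addif br0 eth0
    brctl addif br0 eth1
    # give the bridge an address for the web interface and bring it up
    ip addr add 192.168.0.2/24 dev br0
    ip link set br0 up
    # drop frames arriving on eth0 that are addressed to the bridge host itself,
    # while eth0 <-> eth1 traffic is still forwarded untouched
    ebtables -A INPUT -i eth0 -j DROP
    Because the ebtables INPUT chain only sees frames destined for the bridge host, dropping there should not tear down forwarding across the bridge the way a blanket filter would.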

    Read the article

  • The Future of Life Assurance Conference Recap

    - by [email protected]
    I recently wrote about the Life Insurance Conference held in Washington, DC last month. This week I was both an attendee and a guest speaker at the 13th Annual Future of Life Assurance Conference held at The Guoman Tower in London, UK. It's amazing that these two conferences were held on opposite sides of the Atlantic Ocean and addressed many of the same session topics and themes. Insurance is certainly a global industry!
    This year's conference was attended by many of the leading carriers and CEOs in the UK and across Europe. The sessions included a strong lineup of keynote speakers and panel discussions from carriers such as Legal & General, Skandia, Aviva, Standard Life, Friends Provident, LV=, Zurich UK, Barclays and Scottish Life. Session topics addressed a variety of business and regulatory issues including:
    - Ensuring a profitable future
    - Key priorities in regulation
    - The future of advice
    - The impact of the RDR on distribution
    - Bancassurance
    - Gaining control of the customer relationship
    - Revitalizing product offerings
    In addition, Oracle speakers (Glenn Lottering and myself) led specific sessions on gearing up for Solvency II and speeding product development through adaptive rules-based systems. The main themes that played throughout many of the sessions included: change is here, focusing on customers, the current economic crisis has been challenging, and the industry needs to get back to the basics and simplify - simplify - simplify. Additionally, it is clear that the UK Life & Pension markets will be going through some major changes as new RDR regulation related to advisor fees and commission, and automatic enrollment, are rolled out in 2012.
    Roger A. Soppe, CLU, LUTCF, is the Senior Director of Insurance Strategy, Oracle Insurance.

    Read the article

  • Can you swap dpi buttons on the Astra Dragon War mouse with 4th and 5th buttons?

    - by Denny Nuyts
    I'm left-handed and I'm in need of a good computer mouse with at least five buttons. However, most computer mice on the market are sadly right-handed and very impractical to use, with the extra buttons under my pinkie instead of my thumb. The Dragon War Astra has two buttons on both sides: buttons 4 and 5 on the left side and the dpi buttons on the right side. If I were just able to re-assign them so they swap positions, I'd have a great left-handed mouse. Sadly, the program X-Mouse Button Control doesn't allow the user to re-assign dpi buttons. My question is whether there exist other methods to still get it to work for me (third-party programs, perhaps?). Or should I get another gaming mouse?

    Read the article

  • Recording Skype Video

    - by Leah Peterson
    I have two problems:
    1. When doing a Skype video call, I can't see the other person. Skype says both my input and output are fast/good.
    2. I need a good recording program that will save both sides at once in a podcast. I've been playing around with Vodburner, but it seems to be very temperamental. I can't get anything longer than 30 seconds to save, and even when it does save, the file isn't compatible with Windows Live Movie Maker, even when I change the extension to .MP4 or .WMV. I tried to download the format changer http://www.mediacoderhq.com/ that Vodburner suggests, but only got errors.
    I'm using Windows 7 with 62 GB free.

    Read the article

  • How can i be sure that professional programming is not for me ?

    - by user17766
    Hello everybody. I love programming, developing projects as a hobby and learning new concepts. Yet I am struggling badly in my current job, despite having learnt many things well. I can hardly even understand my assigned tasks, and I keep asking myself why it is so hard for me. Maybe it is not my fault? Our architect doesn't spend enough time explaining the complicated sides of the project to us - or maybe I am just not smart enough to understand it quickly. Does our architect even know what kind of hell he is creating? Seeing 3 levels of generic types and 4-5 levels of generic inheritance in the domain model objects really makes me think so. It looks more like abusing concepts than reducing complexity. I suspect he hasn't experienced such a big project before and is getting confused by its problems himself. Maybe I am not in the right company? Maybe I am not a good programmer? Maybe I am really stupid? Being good at programming concepts is not enough to deal with a big project's complications, so should someone tell me that, even as a good programmer, I still have to put in a lot of effort to adapt to any big project? I also had some bad experiences at my previous job. My professional experience is only a few months, but I spent 2 years learning and coding for fun, and I can honestly say that I have good skills in OOP, design patterns and coding standards, and deep knowledge of the language I currently use. Sometimes I think about leaving professional programming, working in some lame job, and programming just as a hobby. Waiting for suggestions and insights.

    Read the article

  • How can I force the display of image "handles" in Microsoft Word 2010?

    - by Matt
    In order to select images in Microsoft Word documents you need to get the cursor just right so that it turns into the "+" arrow icon, at which point you can click to select the image. When your cursor is not in exactly the right spot you see something like this (note that the letter "m" shown in the picture is an image, not a font): When your cursor is in an appropriate spot you see something like this: For simple images with relatively straight and simple borders, it's easy; you hover over the image and you get the "+" arrow. But for smaller, more intricate images with many sides, thin borders or perhaps transparency it's often madness as you move your cursor all over the image struggling to find the teenie little spot that Word deems is selectable. Is there some means of enabling the display of "handles" (maybe wrong term) around images before you select them, so you can see the selectable spots without hunting and pecking for them?

    Read the article

  • Future Air Plane – A new world

    - by Rekha
    For the first time in my life, I wished I had more years to live. The world has evolved from the caveman's life to the man who is almost The Creator. When I was about 12 years old, I was taken to the Chennai Planetarium for my school excursion. That day we were made to lie down in a dark room, and the ceiling was full of stars and planets. All those were just videos, but the day still stands in my mind. The same kind of experience, for real, is waiting for our future generations. Even though movies have gone beyond imagination, we still have the chance to bring those imaginations to reality. You must be wondering why all this hype. Recently Airbus unveiled news of a transparent airplane for 2050. This airplane will have a transparent body, giving a view of the sky from all sides when flying high above the ground. And it will have all possible technologies under one roof that would give immense pleasure to the passengers. The journey would be an unforgettable one for each one of us.
    Image and News Credit: Daily Telegraph
    This article titled, Future Air Plane - A new world, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • PDF booklet printing

    - by Diego
    I have a Samsung ML-2851nd printer (laser, duplex). When printing booklets from PDF files, which is better:
    1. Printing with "standard" Page Scaling from Reader, and selecting Booklet printing from the Printer Properties, or
    2. Using the Booklet Printing option from Reader, and only selecting "Print on Both Sides" in the Printer Properties?
    If I go with the first option, can I use Page Scaling "None" to get bigger text, or will it cause any problems? (Fit to printable area shrinks to 93%; I'm using A4 paper.) If I go with the second one, what's the correct setting: "Flip on Long Edge" or "Flip on Short Edge"? Thanks!

    Read the article

  • SOHO Netflix and network security

    - by TW
    I want to use WiFi for HiDef video, but I don't trust it for my office PCs. I've heard of VLANs, but I have no idea how to set one up or what (SOHO) hardware to buy. Other than getting 2 different DSL lines, how can I be absolutely sure that the PC side doesn't get hacked? What if I want to use MS Home Server as a backup device for both sides? Can I make it "read only" for the PC side, and physically change the cable if I need to restore? TW

    Read the article
