Search Results

Search found 800 results on 32 pages for 'forcing'.


  • Forcing an External Activation with Service Broker

    - by Davide Mauri
    Over the last few days I’ve been working quite a lot with Service Broker, a technology I’m really happy to work with, since it can give a lot of satisfaction. The scale-out solution one can easily build with it is simply astonishing. I’m helping a company to build a very scalable – and yet almost inexpensive – invoicing system that has to be able to scale out using commodity hardware. To offload the work from the main server to satellite “compute nodes” (yes, I’ve borrowed this term from PDW) we’re using Service Broker and the External Activator application available in the SQL Server Feature Pack. For those who are not used to working with SSB, External Activation is a feature that allows you to intercept the arrival of a message in a queue right from your application code. http://msdn.microsoft.com/en-us/library/ms171617.aspx (look for “Event-Based Activation”) To make life even easier, Microsoft released the External Activator application, which saves you from writing even this code. http://blogs.msdn.com/b/sql_service_broker/archive/tags/external+activator/ The External Activator can be configured to execute your own application, so that each time a message – an invoice in my case – arrives in the target queue, the configured application is invoked and the invoice is calculated. The very nice feature of the External Activator is that it can automatically launch as many instances of the configured application as needed, in order to process as many messages as your system can handle. This helps a lot in creating a scale-out solution, leaving the developer only a fraction of the problems that usually come with asynchronous programming. Developers are also shielded from Service Broker itself, since everything can be encapsulated in Stored Procedures, so that – for them – developing such a scale-out asynchronous solution is not much more complex than executing a bunch of Stored Procedures. Now, if everything works correctly, you don’t have to worry about anything else. You put messages in the queue and your application, invoked by the External Activator, processes them. But what happens if, for some reason, your application fails to process the messages – for example, it crashes? The message is safe in the queue, so you just need to process it again. But your application is invoked by the External Activator, so the question now is: how do you wake that app up? Service Broker will engage the activation process only if certain conditions are met: http://msdn.microsoft.com/en-us/library/ms171601.aspx But how can we invoke the activation process manually, without having to wait for another message to arrive (the arrival of a new message being one condition that can fire the activation process)?
    The “trick” is to do manually what the activation process does: send a system message to the queue in charge of handling External Activation messages:

        declare @conversationHandle uniqueidentifier;
        declare @n xml = N'
        <EVENT_INSTANCE>
          <EventType>QUEUE_ACTIVATION</EventType>
          <PostTime>' + CONVERT(CHAR(24), GETDATE(), 126) + '</PostTime>
          <SPID>' + CAST(@@SPID AS VARCHAR(9)) + '</SPID>
          <ServerName>[your_server_name]</ServerName>
          <LoginName>[your_login_name]</LoginName>
          <UserName>[your_user_name]</UserName>
          <DatabaseName>[your_database_name]</DatabaseName>
          <SchemaName>[your_queue_schema_name]</SchemaName>
          <ObjectName>[your_queue_name]</ObjectName>
          <ObjectType>QUEUE</ObjectType>
        </EVENT_INSTANCE>';

        begin dialog conversation @conversationHandle
            from service [<your_initiator_service_name>]
            to service '<your_event_notification_service>'
            on contract [http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]
            with encryption = off, lifetime = 6000;

        send on conversation @conversationHandle
            message type [http://schemas.microsoft.com/SQL/Notifications/EventNotification] (@n);

        end conversation @conversationHandle;

    That’s it! Put the code in a Stored Procedure and you can add a button to your application that says “Force Queue Processing” (or something similar) in order to start the activation process whenever you need it (which should not be needed too frequently, but it may happen). PS: I know that the “fire-and-forget” technique (ending the conversation without waiting for an answer) is not a best practice, but in this case I don’t see how it can hurt, so I decided to stay very close to the KISS principle.

    Read the article

  • Interface contracts – forcing code contracts through interfaces

    - by DigiMortal
    Sometimes we need a way to make different implementations of the same interface follow the same rules. One option is to duplicate the contracts in every implementation, but that is not a good option because it leaves us with duplicated code. The other option is to force the contracts on all implementations at the interface level. In this posting I will show you how to do it using interface contracts and a contracts class. Starting from the code of my previous example about unit testing code with code contracts, I will go further and force the contracts at the interface level. Here is the code from the previous example. Take a careful look at it, because I will talk about some modifications to this code soon.

        public interface IRandomGenerator
        {
            int Next(int min, int max);
        }

        public class RandomGenerator : IRandomGenerator
        {
            private Random _random = new Random();

            public int Next(int min, int max)
            {
                return _random.Next(min, max);
            }
        }

        public class Randomizer
        {
            private IRandomGenerator _generator;

            private Randomizer()
            {
                _generator = new RandomGenerator();
            }

            public Randomizer(IRandomGenerator generator)
            {
                _generator = generator;
            }

            public int GetRandomFromRangeContracted(int min, int max)
            {
                Contract.Requires<ArgumentOutOfRangeException>(
                    min < max,
                    "Min must be less than max"
                );

                Contract.Ensures(
                    Contract.Result<int>() >= min &&
                    Contract.Result<int>() <= max,
                    "Return value is out of range"
                );

                return _generator.Next(min, max);
            }
        }

    If we look at the GetRandomFromRangeContracted() method, we can see that the contracts set in this method apply to all implementations of the IRandomGenerator interface. Although we can write new implementations as we like, every one of these implementations needs exactly the same contracts. And if we use the generators somewhere else, the code contracts no longer travel with them. To solve the problem we will force the code contracts at the interface level. NB! To make the following code work you must enable Contract Reference Assembly building in the project settings.

    Interface contracts and contracts class

    An interface contains no code – only definitions of the members that an implementing type must have. But code contracts must be defined in the body of the member they are part of. To get around this limitation, the code contracts are defined in a separate contracts class. The interface is bound to this class by a special attribute, and the contracts class refers back to the interface through another special attribute. Here is IRandomGenerator with its contracts and contracts class. I have also written a simple fake so we can easily test the contracts based only on a mock of the interface.
        [ContractClass(typeof(RandomGeneratorContracts))]
        public interface IRandomGenerator
        {
            int Next(int min, int max);
        }

        [ContractClassFor(typeof(IRandomGenerator))]
        internal sealed class RandomGeneratorContracts : IRandomGenerator
        {
            int IRandomGenerator.Next(int min, int max)
            {
                Contract.Requires<ArgumentOutOfRangeException>(
                    min < max,
                    "Min must be less than max"
                );

                Contract.Ensures(
                    Contract.Result<int>() >= min &&
                    Contract.Result<int>() <= max,
                    "Return value is out of range"
                );

                return default(int);
            }
        }

        public class RandomFake : IRandomGenerator
        {
            private int _testValue;

            public RandomFake(int testValue)
            {
                _testValue = testValue;
            }

            public int Next(int min, int max)
            {
                return _testValue;
            }
        }

    To try out these changes, use the following code:

        var gen = new RandomFake(3);

        try
        {
            gen.Next(10, 1);
        }
        catch (Exception ex)
        {
            Debug.WriteLine(ex.Message);
        }

        try
        {
            gen.Next(5, 10);
        }
        catch (Exception ex)
        {
            Debug.WriteLine(ex.Message);
        }

    Now we can force the code contracts on all types that implement our IRandomGenerator interface, and we only have to test the interface to make sure that the contracts are defined correctly.

    Read the article

  • Forcing a Service to Restart at Boot

    - by pjtatlow
    So I use winbind and samba to connect to an AD domain at work, and one of my Ubuntu machines has been having issues. At boot, I cannot log in as an AD user, but if I log in as the local user and do a sudo service winbind restart, it works fine. Yes, winbind does start at boot, although apparently it starts incorrectly somehow. I can't tell anything from the logs, and I'm just wondering how to force winbind to restart after it starts the first time at boot. Thanks!
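
    One low-tech workaround (a sketch, not from the question; it assumes an Ubuntu release where /etc/rc.local still runs as root at the end of boot) is to schedule a second restart once the rest of the boot sequence has settled:

        # /etc/rc.local -- runs as root near the end of boot
        # Give networking and winbind's first start time to finish,
        # then restart winbind so it talks to the AD domain cleanly.
        (sleep 30 && /usr/sbin/service winbind restart) &
        exit 0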

    Read the article

  • forcing upgrade or how to upgrade packages after update failed

    - by Orosjopie
    How do I continue or fix the upgrade, as upgrading from 13.04 to 13.10 failed? I'm currently running a terminal in recovery mode. I get an "unmet dependencies" error; I tried apt-get -f install, but get this error: "unable to fetch some archives". Using apt-get update also brings errors, for example: failed to fetch http://archive.canonical.com/ubuntu/dists/quantal/release.gpg Could not resolve 'archive.canonical.com'. Is it my internet connection? It's currently connected via a network cable, and the internet works on other computers. If the internet is off on my Ubuntu 13.10, how do I switch it on? Or how can I fix my upgrade problem so that I can boot Ubuntu normally, back up, then format and reload Ubuntu 13.10 properly? I posted this problem before, linked to the problem I mentioned: "upgrading to ubuntu 13.10 failed/crashed". I did manage to install/upgrade some of the files and got Ubuntu 13.10 to reboot, but not 100%, as it is slow and the Unity desktop is not showing 100%. In trying some of the commands online, I get the error that the activity manager is not installed; when trying to install it, it conflicts with activity-log-manager-common. Please assist.
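
    For what it's worth, the standard recovery sequence for a broken half-upgrade looks something like the sketch below (assumptions: run as root, and networking/DNS must be up first, which the "Could not resolve" error suggests is the real blocker; in recovery mode, choose the option to enable networking before dropping to the root shell):

        dhclient eth0              # bring up the wired interface if recovery mode didn't
        dpkg --configure -a        # finish any packages left half-configured
        apt-get update             # refresh package lists
        apt-get -f install         # resolve the unmet dependencies
        apt-get dist-upgrade       # complete the interrupted upgrade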

    Read the article

  • Microsoft Forcing Dev/Partners Hands on Win 8 Through Certification

    - by D'Arcy Lussier
    I remember 2.5 years ago when Microsoft dropped a bomb on the Microsoft Partner community: all Gold competencies would require .NET 4 based premiere certifications (MCPD). Problem was, this gave a window of only about 6 months for partners to update their employees’ certifications. At the place I was working, I put together an aggressive plan and we were able to attain the certs needed. Microsoft has always been open about the fact that certification requirements will change as the industry changes. .NET 1.0 certifications are useless here in 2012, and rightfully so; they’ve been retired for a long time now. But now we’re seeing a new tactic by Microsoft – shifting gears away from certifications that speak to what the industry needs and towards the Windows 8 agenda. Consider that currently the premiere development certification is the Microsoft Certified Professional Developer, which comes in three flavours – Web, Windows, and Azure. All require WCF and Data Access exams, as well as one that deals with the associated base technologies (ASP.NET, WinForms/WPF, Azure), and one that ties all three together in a solution-based exam. For Microsoft-based organizations, these skills aren’t just valid but necessary in building Microsoft applications. But the MCPD is being replaced with our old friend, the Microsoft Certified Solutions Developer (MCSD). So far, Microsoft has only released two types of MCSD – Web and Windows Store Apps. Windows Store Apps?! In a push to move developers to create WinRT-based applications, desktop development is now considered a second-class citizen in the eyes of Redmond. Also interesting are the language options for the exams: HTML5 and C#. Sorry VB folks, it’s time to embrace curly braces, whether they be JavaScript or C#. Consider too the skills being assessed for the Windows Store Apps certifications: Get your MCSD: Windows Store Apps Using HTML5 Get your MCSD: Windows Store Apps Using C# *Image Source: http://www.microsoft.com/learning/en/us/certification/mcsd-windows-store-apps.aspx Nov 21/2012 If you look at the skills being tested in each exam, you’ll find that skills like WCF and Data Access are downplayed compared to things like integrating Charms, facilitating Search, and programming for the microphone and camera – all very Windows 8 focused items. Where this becomes maddening is that Microsoft is still pushing Windows 7 with enterprise clients. According to a ZDNet article, Microsoft wants to see Windows 7 on 70% of enterprise desktops by mid 2013. Assuming they somehow meet that (it’s a pretty lofty goal), there are years of traditional desktop-based development that will still be required at some level. For those thinking they’ll just write and stick with the MCPD certification, note that most exams that go towards that certification will be retired at the end of July 2013! (Read the small print.) And while details haven’t been finalized, it’s a safe bet that MCPD certifications eventually won’t count towards Gold-level competencies in the Microsoft Partner program. What this means for Microsoft Partners and Developers is that certification for desktop development is going to be limited to Windows Store Apps unless Microsoft re-introduces a traditional desktop (WPF) based MCSD cert. Web Application Development – It’s Not All Bad There are big changes on the web side of certification, but I actually see these changes as being for the good!
    Check out the new exam requirements for the MCSD – Web Applications: Get your MCSD: Web Applications certification *Image Source: http://www.microsoft.com/learning/en/us/certification/cert-mcsd-web-applications.aspx Nov 21, 2012 We now *start* with HTML5, JavaScript, and CSS3! Now I’m sure that these will be slanted towards web development in IE, and I can hear designers everywhere bemoaning the CSS/IE combination. Still, I applaud Microsoft for adopting HTML5 as the go-to web technology and requiring certified developers to prove they have skills in the basics of web dev. The fact that the second exam clearly states “MVC Web Applications” shows that Web Forms is truly legacy and deprecated. That’s not to say there aren’t those out there who are still supporting or (for whatever reason) doing new dev with Web Forms, but this move by Microsoft is telling the community they had better get on the MVC bandwagon if they want to stay current. Fantastic! And of course Azure needs to be here as well, and this is where the Microsoft agenda fits in. It’s no secret that there’s been a huge push to get developers on to Azure. I don’t see this as a bad thing either, as cloud computing (whether Azure, private, or 3rd party) is a necessary skill for developers to have here in 2012. The cynic in me realizes that the HTML5/JavaScript/CSS push wouldn’t be as prominent, though, if not for the Windows 8 Store App play, where HTML5 is a first-class citizen (and an available language for the MCSD Windows Store App cert). In this case, the desktop developer’s loss is the web developer’s gain. Get Ready for Changes In addition to the changes in certifications, the Microsoft Partner competencies are going through changes as well. Web and Software Development are being merged into a single competency, meaning that the licenses you would have received from having both at Gold are reduced. Other competencies are either being removed or changed, as are the exam requirements. In the same way that we’re seeing faster release cycles from Microsoft, so too will we see the Microsoft Partner Program and MS certifications evolve faster than ever before. Many of us got caught in the last wave of changes, but this time we can see the wave coming – and it looks pretty big!

    Read the article

  • SOA Forcing A Shift In IT Governance

    As more and more companies adopt a service-oriented approach to developing and maintaining existing enterprise systems, IT governance also needs to shift its philosophies to fit the emerging development paradigm. When I first started programming, companies placed an emphasis on a “Code and Go” style of software development. They developed only for current problems and did not really look at how the company could leverage some of the code we were developing across the entire enterprise system. The concept of Service Oriented Architecture (SOA) has dramatically shifted how we develop enterprise software by emphasizing software processes as company assets. This has driven some to start developing new components as processes strictly for the possibility of future integration of existing and new systems. I personally like this new paradigm because it truly promotes code reusability. However, most enterprise-level IT governance policies were created prior to the introduction of SOA in their respective organizations. This can create a sense of the Wild West for developers working on projects related to SOA, because a lot of the standards and policies implemented by enterprise IT governing boards were originally written for development under the “Code and Go” paradigm and do not take into account the idiosyncrasies found in SOA/integration-based development. As IT governance moves forward, its focus should aim more for a “Develop to Integrate” philosophy than a “Code and Go” one. Examples of the “Develop to Integrate” philosophy: Defining preferred data transfer methodologies (XML vs. JSON), and when to use them; updating security best practices for exposing public services, based on existing standard security policies; defining when to create a new SOA project vs. implementing localized components that could be reused elsewhere in the enterprise.

    Read the article

  • Forcing Nautilus to use Kerberos (Active Directory) authentication

    - by user14146
    Is there a way to get Nautilus, or any other file manager that runs on Ubuntu 11.04, to use Kerberos for authentication? I'm using Likewise Open to join machines to the domain, and I can't type in passwords for every user on every computer that needs to mount a network share. I've been able to get Kerberos working with the command-line smbclient, but oddly Kerberos does not seem to be integrated into Nautilus. I also checked the SSH config file, and it looks like you can enable GSSAPIAuthentication there, but that only applies to SSH protocol version 2 connections, not to Nautilus.
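
    For comparison, here is the sort of Kerberos-authenticated access that already works from a shell, plus one possible workaround while the file manager lacks integration (the server, share, and realm names below are placeholders):

        # Kerberos-authenticated SMB from the command line
        kinit user@EXAMPLE.COM
        smbclient -k //fileserver.example.com/share

        # Possible workaround: mount the share via cifs using the Kerberos ticket
        sudo mount -t cifs //fileserver.example.com/share /mnt/share -o sec=krb5,user=youruser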

    Read the article

  • Updating password hashing without forcing a new password for existing users

    - by Willem
    You maintain an existing application with an established user base. Over time it is decided that the current password hashing technique is outdated and needs to be upgraded. Furthermore, for UX reasons, you don't want existing users to be forced to update their password. The whole password hashing update needs to happen behind the scenes. Assume a 'simplistic' database model for users that contains: ID Email Password How does one go about solving such a requirement? My current thoughts are: create a new hashing method in the appropriate class; update the user table in the database to hold an additional password field; once a user successfully logs in using the outdated password hash, fill the second password field with the updated hash. This leaves me with the problem that I cannot reasonably differentiate between users who have and those who have not updated their password hash, and thus will be forced to check both. This seems horribly flawed. Furthermore, it basically means that the old hashing technique could be forced to stay indefinitely, until every single user has updated their password. Only at that moment could I start removing the old hashing check and remove the superfluous database field. I'm mainly looking for some design tips here, since my current 'solution' is dirty and incomplete, but if actual code is required to describe a possible solution, feel free to use any language.
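
    One common design that avoids both problems (a minimal sketch, not from the question; the choice of PBKDF2 for the new scheme and unsalted SHA-1 for the legacy one are illustrative assumptions) is to store each hash with a scheme prefix in the existing password column, so old and new rows can be told apart, and to rehash transparently at login, the only moment the plaintext is available:

        import hashlib
        import hmac
        import os

        ITERATIONS = 100000

        def hash_new(password):
            # New scheme, stored as "pbkdf2$<salt>$<hash>" in the one password column
            salt = os.urandom(16).hex()
            dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), ITERATIONS)
            return "pbkdf2$%s$%s" % (salt, dk.hex())

        def verify_and_upgrade(password, stored):
            """Return (ok, replacement_hash). replacement_hash is None unless the
            row still uses the legacy scheme and should be rewritten in the DB."""
            if stored.startswith("pbkdf2$"):
                _, salt, expected = stored.split("$")
                dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), ITERATIONS)
                return (hmac.compare_digest(dk.hex(), expected), None)
            # Legacy row: assumed here to be an unsalted SHA-1 hex digest
            ok = hmac.compare_digest(hashlib.sha1(password.encode()).hexdigest(), stored)
            return (ok, hash_new(password) if ok else None)

    Because the scheme travels inside the stored value (the same idea as bcrypt's modular-crypt "$2a$..." format), no second column is needed, and once a query shows no rows left without the "pbkdf2$" prefix, the legacy branch can be deleted or the stragglers expired with a forced reset.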

    Read the article

  • Forcing Zeitgeist to index Dropbox folder

    - by Jarmo
    I am running Ubuntu 11.10, and I would like to force Zeitgeist to index my Dropbox folder. I understand that Zeitgeist is a passive service that logs particular events (such as opening or downloading files) for later searches, but I have a large Dropbox folder that was downloaded without being logged by Zeitgeist. Short of manually opening and closing every file in my Dropbox folder, is there a way to have Zeitgeist index this folder so that I can later search it using the Dash? Thanks!
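
    In principle the events can be injected programmatically: the sketch below walks the folder and inserts one access event per file through the python-zeitgeist bindings (untested, and written from memory of that API; the actor, interpretation, and manifestation constants in particular may need adjusting):

        import os
        from zeitgeist.client import ZeitgeistClient
        from zeitgeist.datamodel import Event, Subject, Interpretation, Manifestation

        client = ZeitgeistClient()
        events = []
        for root, dirs, files in os.walk(os.path.expanduser("~/Dropbox")):
            for name in files:
                subject = Subject.new_for_values(
                    uri="file://" + os.path.join(root, name),
                    manifestation=Manifestation.FILE_DATA_OBJECT)
                events.append(Event.new_for_values(
                    interpretation=Interpretation.ACCESS_EVENT,
                    manifestation=Manifestation.USER_ACTIVITY,
                    actor="application://nautilus.desktop",
                    subjects=[subject]))
        client.insert_events(events)  # asynchronous; may need a running GLib main loop to flush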

    Read the article

  • Firefox and Chrome keeps forcing HTTPS on Rails app using nginx/Passenger

    - by Steve
    I've got a really weird problem here: every time I try to browse my Rails app in non-SSL mode, Chrome (v16) and Firefox (v7) keep forcing my website to be served over HTTPS. My Rails application is deployed on an Ubuntu VPS using Capistrano, nginx, Passenger and a wildcard SSL certificate. I have set these parameters for port 80 in the nginx.conf:

        passenger_set_cgi_param HTTP_X_FORWARDED_PROTO http;
        passenger_set_cgi_param HTTPS off;

    The long version of my nginx.conf can be found here: https://gist.github.com/2eab42666c609b015bff The ssl-redirect.include file contains:

        rewrite ^/sign_up https://$host$request_uri? permanent ;
        rewrite ^/login https://$host$request_uri? permanent ;
        rewrite ^/settings/password https://$host$request_uri? permanent ;

    It is there to make sure those three pages use HTTPS when reached over a non-SSL request. My production.rb file contains this line:

        # Enable HTTP and HTTPS in parallel
        config.middleware.insert_before Rack::Lock, Rack::SSL, :exclude => proc { |env| env['HTTPS'] != 'on' }

    I have tried redirecting to HTTP via nginx rewrites, via Ruby on Rails redirects, and also by building Rails view URLs with the HTTP protocol. My application.rb file contains this method, used in a before_filter hook:

        def force_http
          if Rails.env.production? && request.ssl?
            redirect_to :protocol => 'http', :status => :moved_permanently
          end
        end

    Every time I try to redirect to non-SSL HTTP, the browser redirects back to HTTPS, causing an infinite redirect loop. Safari, however, works just fine. Even when I've disabled serving SSL in nginx, the browsers still try to connect to the site using HTTPS. I should also mention that when I pushed my app to Heroku, the Rails redirect worked just fine for all browsers. The reason I want to use non-SSL is that my homepage contains non-secure dynamic embedded objects and a non-secure CDN, and I want to prevent security warnings. I don't know what is causing the browser to keep forcing HTTPS requests.
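
    An educated guess, not from the original post: Rack::SSL sets a Strict-Transport-Security (HSTS) header on every HTTPS response, Chrome and Firefox of that vintage honour HSTS, and Safari did not, which matches the browser split exactly. Once a browser has seen the header, it rewrites every plain-HTTP request to HTTPS itself, before nginx is ever consulted. A sketch of how to make browsers forget the pin while debugging:

        # In the HTTPS server block: HSTS can only be set or cleared on a
        # secure response, so serve a zero lifetime to expire the stored pin
        add_header Strict-Transport-Security "max-age=0";

    If that turns out to be the cause, the longer-term fix is configuring Rack::SSL (or its exclude logic) so the header is not emitted for hosts that must stay reachable over plain HTTP.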

    Read the article

  • Forcing DFS replication on Windows 2003 Web Edition SP2

    - by Greg B
    I have a pair of web servers running Windows Server 2003 Web Edition SP2 (build 3790). The servers are on a Gigabit LAN and on the same subnet. I created a new link on the 1st server and added the second box as a target of the link. I configured replication as a ring. The 2nd server gets some of the files on the scheduled replication, but this varies wildly, which makes me think replication isn't running fully. The folder on server 1 is 1.3GB. Is there some way I can force replication (from a console?), monitor the progress, and see if/when/why it fails?
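
    For what it's worth (not from the question, and from memory, so double-check with ntfrsutl /? first): on Server 2003 pre-R2, DFS replication is handled by the File Replication Service, and the ntfrsutl support tool can kick and inspect it. Something like the following, with the server, replica-set, and partner names as placeholders:

        REM list replica sets and their state on WEB1
        ntfrsutl sets WEB1
        REM force replication of a replica set from partner WEB2
        ntfrsutl forcerepl WEB1 /r "MY-REPLICA-SET" /p WEB2.example.com
        REM inbound log: change orders still waiting to be installed
        ntfrsutl inlog WEB1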

    Read the article

  • Forcing specific resolution on a specific monitor on Mac OS X

    - by ufk
    I have Snow Leopard (Mac OS X 10.6) installed with 2 monitors. Main: a 24-inch Dell monitor that Mac OS X detects and drives at 1920x1200. Secondary: a 19-inch Chimei monitor that supports a resolution of 1440x900, but Mac OS X detects it as 1344x1008. How can I force a 1440x900 resolution on my secondary monitor?

    Read the article

  • Forcing the from address when postfix relays over smtp

    - by John Whitlock
    I'm trying to get email reports from our AWS EC2 instances. We're using Exchange Online (part of Microsoft Online Services). I've set up a user account specifically for SMTP relaying, and I've set up Postfix to meet all the requirements to relay messages through this server. However, Exchange Online's SMTP server will reject messages unless the From address exactly matches the authentication address (the error message is 550 5.7.1 Client does not have permissions to send as this sender). With careful configuration, I can set up my services to send as this user. But I'm not a huge fan of being careful - I'd rather have Postfix force the issue. Is there a way to do this?
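
    One way to force it (a sketch; the hostname and relay account below are placeholders) is Postfix's generic address mapping, which rewrites sender addresses on mail leaving the machine via SMTP:

        # /etc/postfix/main.cf
        smtp_generic_maps = hash:/etc/postfix/generic

        # /etc/postfix/generic - map everything to the authenticated relay account
        @ip-10-0-0-1.ec2.internal    relay-user@example.com

    Then run postmap /etc/postfix/generic and reload Postfix. See the generic(5) man page; lookups are tried as user@domain, then user, then @domain.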

    Read the article

  • Forcing logon to Air Watch server upon joining wifi

    - by DKNUCKLES
    I'm setting up a wireless controller that I would like to leave unsecured. When a user connects to this network, they need to be forwarded to a specific page where they can authenticate with the AirWatch system we have in place. Once authentication takes place, a profile is downloaded to their device and we can administer the devices accordingly. I'm mulling over how I can force that page on the user when they connect. The methodology I'm thinking of is creating a NAT rule for that VSC that would forward all port 80 and 443 traffic to the AirWatch server. Once they authenticate, a profile will be downloaded which will connect the devices to a Virtual Access Point whose SSID isn't broadcast. Is this methodology correct, or can someone think of an easier / more efficient way of accomplishing this? The controller is an HP MSM720, for what it's worth.

    Read the article

  • My EliteBook is not auto picking 1080p for external monitor, poor display on forcing

    - by Griever
    I'm connecting my Samsung LED S22A300B to my HP EliteBook 6930p through the VGA out. The laptop has an Intel 4500MHD video card, and I have the latest drivers installed for both the card and the monitor. Only 800x600 and 1024x768 are offered. A lot of other people get this problem when they use a docking station, as discussed here, but I am not using any docking station, and the monitor works great with my desktop. As advised on the aforementioned page, one of the things I tried was to force the resolution using Intel's "custom resolution" feature. I installed PowerStrip on my desktop, copied the advanced timing values (front/back porch, sync width, etc.) from there, and then used the same values to define a custom resolution in my laptop's Intel graphics utility. As a result, I got the 1080p resolution, but the display is poor: text has a weird coloured shadow, which sometimes shows on images too. What should I do?

    Read the article

  • Forcing smtp outgoing mail encryption on postfix

    - by Simon
    Hi all, does anyone know how to tell Postfix to encrypt outgoing mail? I have configured it to use encryption on reception, but I'm unable to do it for outgoing mail. This is my main.cf file:

        smtpd_recipient_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_path = smtpd
        transport_maps = hash:/etc/postfix/transport
        # tls config
        smtp_use_tls = yes
        smtpd_use_tls = yes
        smtp_tls_note_starttls_offer = yes
        smtpd_tls_key_file = /etc/postfix/ssl/smtpd.pem
        smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.pem
        smtpd_tls_CAfile = /etc/postfix/ssl/smtpd.pem
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_timeout = 3600s
        tls_random_source = dev:/dev/urandom

    Thanks in advance!
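
    A hint at what is probably missing (a sketch, not from the post): smtp_use_tls = yes only makes client-side TLS opportunistic. On Postfix 2.3 and later, the enforcement knob for outgoing mail is smtp_tls_security_level; note the smtp_ prefix, since the smtpd_ parameters shown above only cover incoming connections:

        # main.cf - require TLS on mail Postfix sends out
        smtp_tls_security_level = encrypt    # Postfix >= 2.3
        # smtp_enforce_tls = yes             # older, pre-2.3 equivalent

    Be aware that mandatory TLS will defer mail to any destination whose server does not offer STARTTLS, so "encrypt" is usually reserved for relays you control.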

    Read the article

  • Turn off forcing www.

    - by Kemo
    I have a 'clean' CentOS system with Webmin, running Apache2 and BIND DNS. When I try accessing the domain without www, I get instantly redirected to www.domain.name, as the Firebug Net console screenshot below displays: http://pokit.etf.ba/upload/pokit141661fa46b11782745bb974d5140004.png What I need to know is: what are the most common reasons for this? If you need any more info, cfg or log files, please let me know.
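
    The usual suspect (a guess at the common causes, not taken from the post) is a rewrite or Redirect directive in the Apache configuration, possibly generated by a Webmin virtual-host template. The classic www-forcing block looks something like this; finding and removing (or inverting) it turns the behaviour off:

        # Typical rule in httpd.conf or a vhost that forces the www. prefix
        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www\. [NC]
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

    Something like grep -ri "rewrite\|redirect" /etc/httpd/ should locate it, and since 301 redirects are cached aggressively, retest in a fresh browser profile after changing the config.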

    Read the article

  • Forcing a particular SSL protocol for an nginx proxying server

    - by vitch
    I am developing an application against a remote https web service. While developing I need to proxy requests from my local development server (running nginx on ubuntu) to the remote https web server. Here is the relevant nginx config:

        server {
            server_name project.dev;
            listen 443;
            ssl on;
            ssl_certificate /etc/nginx/ssl/server.crt;
            ssl_certificate_key /etc/nginx/ssl/server.key;

            location / {
                proxy_pass https://remote.server.com;
                proxy_set_header Host remote.server.com;
                proxy_redirect off;
            }
        }

    The problem is that the remote HTTPS server can only accept connections over SSLv3, as can be seen from the following openssl calls. Not working:

        $ openssl s_client -connect remote.server.com:443
        CONNECTED(00000003)
        139849073899168:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177:
        ---
        no peer certificate available
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 0 bytes and written 226 bytes
        ---
        New, (NONE), Cipher is (NONE)
        Secure Renegotiation IS NOT supported
        Compression: NONE
        Expansion: NONE
        ---

    Working:

        $ openssl s_client -connect remote.server.com:443 -ssl3
        CONNECTED(00000003)
        <snip>
        ---
        SSL handshake has read 1562 bytes and written 359 bytes
        ---
        New, TLSv1/SSLv3, Cipher is RC4-SHA
        Server public key is 1024 bit
        Secure Renegotiation IS NOT supported
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol : SSLv3
            Cipher   : RC4-SHA
        <snip>

    With the current setup my nginx proxy gives a 502 Bad Gateway when I connect to it in a browser. Enabling debug in the error log I can see the message: [info] 1451#0: *16 peer closed connection in SSL handshake while SSL handshaking to upstream. I tried adding ssl_protocols SSLv3; to the nginx configuration but that didn't help. Does anyone know how I can set this up to work correctly?
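
    Worth noting (not from the original question): ssl_protocols only governs the server side of nginx, i.e. connections from browsers to the proxy, which is why adding it changed nothing. The upstream handshake has its own directive on newer builds. A sketch, assuming nginx 1.5.6 or later:

        location / {
            proxy_pass https://remote.server.com;
            proxy_set_header Host remote.server.com;
            proxy_redirect off;
            # Controls the TLS handshake nginx makes *to the upstream*,
            # unlike ssl_protocols, which covers incoming connections.
            proxy_ssl_protocols SSLv3;
        }

    On builds older than 1.5.6 there is no per-upstream protocol knob, and the usual workaround was a local stunnel (or similar) hop pinned to SSLv3 between nginx and the remote host.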

    Read the article

  • Forcing Remote machine to serve RDP (or similar) session

    - by sMaN
    Due to a Dell/Nvidia design flaw in the Dell Inspiron 1420 series, my laptop no longer shows a display, so I am looking for a way to view it remotely. I have used it via RDP in the past (though not for a year), but for whatever reason I can no longer RDP in; it could have been disabled somehow. However, I'm on the same LAN, I can ping it, and I know its login credentials. Is there a way I can get into it remotely to force it to serve an RDP session, or an alternative? Please bear in mind that the only view I can have of its interface is via a remote session. It's running Windows 7 Pro.
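
    One standard trick (a sketch under some assumptions: the Remote Registry service is running on the laptop, remote-admin traffic isn't blocked, and you have admin credentials) is to re-enable Remote Desktop by flipping its registry flag from another Windows machine, then connecting normally:

        REM Run on another Windows box on the LAN, as a user with admin rights on LAPTOP
        reg add "\\LAPTOP\HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" ^
            /v fDenyTSConnections /t REG_DWORD /d 0 /f

    If the Windows firewall still blocks port 3389, Sysinternals PsExec can run commands remotely (for example a netsh firewall rule change, or net start for the Remote Registry service) using the same credentials.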

    Read the article

  • Forcing authentication for local domains

    - by Taron Sargsyan
    Today I noticed a strange issue on my ISPConfig 3 installation. After some debugging it was clear that anyone can send emails to local domains without authentication. I checked main.cf and saw that smtpd_sender_restrictions points to the mail_access table in the ISPConfig database. The issue is that the table is empty, and I'm not sure how to add a record there through the ISPConfig interface. Any thoughts? Thanks in advance.

    Read the article

  • forcing itunes tags to the file itself

    - by user21458
    How do I make iTunes save tagging information to the file itself? I tagged a file and then loaded it into a different iTunes library on a different computer via an NFS share. The tagging info wasn't present, which leads me to believe the tagging info is only stored in the iTunes DB. Update: I'm specifically concerned about movie files, so if you RClick - Get Info: Options - Media Kind (Movies/TV/etc), Video - Show/Episode/Season. These tags don't seem to be saved to the file itself, which is lame.

    Read the article

  • Forcing IE8 Standards Mode with FEATURE_BROWSER_EMULATION

    - by earls
    I'm doing this: http://blogs.msdn.com/b/ie/archive/2009/03/10/more-ie8-extensibility-improvements.aspx But it's not working. I have "iexplore.exe" set to 8888 (decimal) under the MACHINE hive, but the page still comes up with documentMode = 5. I thought 8888 was supposed to force IE8 Standards Mode whether you have a doctype or not. What is going on?
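
    For comparison, this is the registry layout the feature expects (8888 decimal is 0x22B8). One common gotcha worth ruling out: on 64-bit Windows, the 32-bit iexplore.exe reads the Wow6432Node branch instead, so the value may need to exist in both places. A sketch as a .reg file:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION]
        "iexplore.exe"=dword:000022b8

        ; 32-bit IE on 64-bit Windows reads this branch instead
        [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION]
        "iexplore.exe"=dword:000022b8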

    Read the article
