Search Results

Search found 30451 results on 1219 pages for 'browser support'.

  • Graphics card support

    - by Daryl
    Brand new user to Ubuntu and Linux. Quick question that I think I already know the answer to: does Ubuntu 11.10 have an updated driver for my graphics card? I was planning on being able to use S-Video out and watch videos on my TV like I could when I had Vista installed.

        daryl@daryl-Aspire-3050:~$ lspci -nn | grep VGA
        01:05.0 VGA compatible controller [0300]: ATI Technologies Inc RS482 [Radeon Xpress 200M] [1002:5975]

    The actual card is an ATI Radeon Xpress 1100. As it is, Ubuntu is not recognizing when I plug an S-Video cable into my laptop. I've narrowed it down to Ubuntu using what I think is a generic driver, because all of the S-Video enable commands I found and tried have failed.

    Read the article

  • X-notifier doesn't work in Chromium Browser

    - by cipricus
    It just keeps checking in vain. It also cannot import or export data, but I get this error. I use the latest versions of both in Lubuntu 12.04. In Google Chrome it works. What could the problem be? Edit: following vasa1's comment, running sudo aa-status I get:

        apparmor module is loaded.
        16 profiles are loaded.
        16 profiles are in enforce mode.
           /sbin/dhclient
           /usr/bin/evince
           /usr/bin/evince-previewer
           /usr/bin/evince-previewer//launchpad_integration
           /usr/bin/evince-previewer//sanitized_helper
           /usr/bin/evince-thumbnailer
           /usr/bin/evince-thumbnailer//sanitized_helper
           /usr/bin/evince//launchpad_integration
           /usr/bin/evince//sanitized_helper
           /usr/lib/NetworkManager/nm-dhcp-client.action
           /usr/lib/connman/scripts/dhclient-script
           /usr/lib/cups/backend/cups-pdf
           /usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper
           /usr/sbin/cupsd
           /usr/sbin/ntpd
           /usr/sbin/tcpdump
        0 profiles are in complain mode.
        3 processes have profiles defined.
        3 processes are in enforce mode.
           /sbin/dhclient (1562)
           /usr/sbin/cupsd (916)
           /usr/sbin/ntpd (1695)
        0 processes are in complain mode.
        0 processes are unconfined but have a profile defined.

    Read the article

  • Modifying the SL/WIF Integration Bits to support Issued Token Credentials

    - by Your DisplayName here!
    The SL/WIF integration code that ships with the Identity Training Kit only supports Windows and UserName credentials to request tokens from an STS. This is fine for simple single-STS scenarios (like a single IdP). But the more common pattern for claims/token based systems is to split the STS roles into an IdP and a Resource STS (or whatever you wanna call it). In this case, the second leg requires presenting the issued token from the first leg – this is not directly supported by the bits. But they can be easily modified to accomplish this.

    The Credential
    First we need a class that represents an issued token credential. Here we store the RSTR that got returned from the client-to-IdP request:

        public class IssuedTokenCredentials : IRequestCredentials
        {
            public string IssuedToken { get; set; }
            public RequestSecurityTokenResponse RSTR { get; set; }

            public IssuedTokenCredentials(RequestSecurityTokenResponse rstr)
            {
                RSTR = rstr;
                IssuedToken = rstr.RequestedSecurityToken.RawToken;
            }
        }

    The Binding
    Next we need a binding to be used with issued token credential requests. This assumes you have an STS endpoint for mixed mode security with SecureConversation turned off.

        public class WSTrustBindingIssuedTokenMixed : WSTrustBinding
        {
            public WSTrustBindingIssuedTokenMixed()
            {
                this.Elements.Add( new HttpsTransportBindingElement() );
            }
        }

    WSTrustClient
    The last step is to make some modifications to WSTrustClient to make it issued-token aware. In the constructor you have to check for the credential type, and if it is an issued token, store it away.

        private RequestSecurityTokenResponse _rstr;

        public WSTrustClient( Binding binding, EndpointAddress remoteAddress, IRequestCredentials credentials )
            : base( binding, remoteAddress )
        {
            if ( null == credentials )
            {
                throw new ArgumentNullException( "credentials" );
            }

            if (credentials is UsernameCredentials)
            {
                UsernameCredentials username = credentials as UsernameCredentials;
                base.ChannelFactory.Credentials.UserName.UserName = username.Username;
                base.ChannelFactory.Credentials.UserName.Password = username.Password;
            }
            else if (credentials is IssuedTokenCredentials)
            {
                var issuedToken = credentials as IssuedTokenCredentials;
                _rstr = issuedToken.RSTR;
            }
            else if (credentials is WindowsCredentials)
            { }
            else
            {
                throw new ArgumentOutOfRangeException("credentials", "type was not expected");
            }
        }

    Next – when WSTrustClient constructs the RST message to the STS, the issued token header must be embedded when needed:

        private Message BuildRequestAsMessage( RequestSecurityToken request )
        {
            var message = Message.CreateMessage( base.Endpoint.Binding.MessageVersion ?? MessageVersion.Default,
                IssueAction,
                (BodyWriter) new WSTrustRequestBodyWriter( request ) );

            if (_rstr != null)
            {
                message.Headers.Add(new IssuedTokenHeader(_rstr));
            }

            return message;
        }

    HTH

    Read the article

  • Existing Laravel 4 project gives 404 in browser

    - by Richard A
    I'm trying to set up a development environment on a virtual machine running Ubuntu 14.04 LTS using Nginx and HHVM. To do this, I followed the tutorial here. This goes well with a new installation of Laravel. But when I import an existing Laravel 4 project and try to open it on my actual machine (which will serve as the client, running Windows 7), I get a 404 File Not Found error on the screen while connecting to http://sav.savrichard.dev. I did add this to the hosts file with the correct IP address. The virtual machine is receiving the request and responds with a 404 error. How do I solve this error? I'm pretty new to Ubuntu, so I'm not exactly sure what's wrong. The project is located at /var/www/sav.savrichard.net and the server configuration is as follows:

        server {
            listen 80 default_server;

            root /var/www/sav.savrichard.net/public;
            index index.html index.htm index.php;

            server_name sav.savrichard.dev;

            access_log /var/log/nginx/localhost.sav.savrichard.dev-access.log;
            error_log /var/log/nginx/localhost.sav.savrichard.dev-error.log error;

            charset utf-8;

            location / {
                try_files $uri $uri/ /index.php?$query_string;
            }

            location = /favicon.ico { log_not_found off; access_log off; }
            location = /robots.txt  { log_not_found off; access_log off; }

            error_page 404 /index.php;

            include hhvm.conf;

            # Deny .htaccess file access
            location ~ /\.ht {
                deny all;
            }
        }

    And the hhvm.conf file is:

        location ~ \.(hh|php)$ {
            fastcgi_keep_conn on;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    Read the article

  • How to support tableless columns with WYSIWYG editor?

    - by Andy
    On the front page of a site I'm working on there's a small slideshow. It's not for pictures in particular; any content can go in, and I'm currently setting up the editing interface for the client. I'd like to be able to have one/two/more columns in the editable area, and ideally that would be via CSS - does anyone know of a WYSIWYG editor that supports this? I'm using Drupal (I would prefer not to involve Panels, as it would require a bit of work to make it a streamlined workflow for content entry), in case that matters to anyone. To start the ball rolling, one way would be to use templates. I know CKEditor supports templates, and it looks like TinyMCE might have something similar. I don't know how well these work with tableless columns (the CKEditor homepage demo uses tables to achieve its two-column effect). Holding out for a cool solution!

    Read the article

  • Week in Geek: Google Chrome Rises to the Top of the Browser Heap, Becomes #1

    - by Asian Angel
    Our last edition of WIG for May is filled with news link goodness covering topics such as a smartphone hijacking vulnerability that affects AT&T and 47 other carriers, a possible problem with Windows 8 booting too quickly, a study claiming that half of PC users are pirates, and more.

    How To Customize Your Wallpaper with Google Image Searches, RSS Feeds, and More
    47 Keyboard Shortcuts That Work in All Web Browsers
    How To Hide Passwords in an Encrypted Drive Even the FBI Can’t Get Into

    Read the article

  • CoffeeScript - inability to support progressive adoption

    - by Renso
    First off, what is CoffeeScript? Web definitions: CoffeeScript is a programming language that compiles statement-by-statement to JavaScript. The language adds syntactic sugar inspired by Ruby and Python to enhance JavaScript's brevity and readability, as well as adding more sophisticated features like array comprehension and pattern matching.

    The issue with CoffeeScript is that it eliminates any progressive adoption. It is a purist approach, kind of like the Amish: if you're not born Amish, tough luck. So folks with thousands of lines of JavaScript code will have a tough time converting it to CoffeeScript. You can use the js2coffee API to convert a JavaScript file to CoffeeScript, but in my experience it had trouble converting the files. It would convert the file to CoffeeScript without any complaints, but then compiling the generated CoffeeScript produced errors with, guess what: INDENTATION!

    I tried to convince the CoffeeScript community on GitHub, but got lots of push-back against progressive adoption, with comments like "stupid", "crap", "child's comportment", "it's like Ruby, Python", "legacy code", etc. As a matter of interest, one of the first comments was that the code needs to be re-designed before being converted to CoffeeScript. Well, I rest my case then :-)

    So far the community on GitHub has been very reluctant to even consider introducing some way to define code blocks; obviously curly braces are not an option, as they are used for JSON object definitions. They also have no consideration for a progressive adoption where some, if not all, JavaScript syntax would be allowed, which means all of us in the real world who have thousands of lines of JavaScript will have a real issue converting it over. Worse, I for one lack the confidence that tools like js2coffee will produce the correct indentation that determines the flow of control in your code!!! Actually it is hard for me to find enough justification for using spaces or tabs to control the flow of code. It is no wonder that C#, C, C++, Java, and all enterprise-scale frameworks still use curly braces. I have never seen an enterprise app built with Ruby or PHP.

    Let me know what your concerns are with CoffeeScript and how you dealt with large-scale JavaScript conversions to CoffeeScript.

    Read the article

  • SQL DB design to support user feeds (in an application like Facebook)

    - by Yoav
    I have a social network server with a MySQL DB. I want to show the users feeds like Facebook does. Example: userX is now friends with userY, userX liked postX, etc. Currently I have a Log table:
    C1 : UserId
    C2 : LogType (now friend, did like, etc.)
    C3 : ObjectId (can be a userId or a postId, depending on the LogType)
    Currently, to get all related logs to show to the user, I do the following:
    1. Get all the user's friends' userIds.
    2. Query all rows whose C1 is in those userIds (one query).
    3. Scan the DB and check: if LogType equals DidLike, check if the post's OwnerId is the userId; if yes, add it to the logs. And so on.
    Obviously this is not efficient at all. I am looking for a better way. Here is what I have in mind. Create a new table (in addition to the Log table):
    C1 : UserId
    C2 : LogId (from the Log table)
    C3 : UserId of the one who did the action
    When querying logs, look in this table and get the related Logs (by LogId) from the Log table. Updating the table: whenever a user does an action that should be in the log:
    1. Add the Log entry to the Log table.
    2. Scan the DB and see which users are interested in the Log (who my friends are, who the owner of the post is) and add the related entries to the new table (must be done in the background).
    3. If a user UNFRIENDs another user, look in the new table for all rows where C3 == the unfriended user's id and delete them.
    Any opinions? Other suggestions?
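
    A minimal sketch of the fan-out design described above, assuming SQLite and illustrative table and column names rather than the actual schema; a real system would run the fan-out step asynchronously, as noted in step 2:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE log  (log_id INTEGER PRIMARY KEY, actor_id INTEGER, log_type TEXT, object_id INTEGER);
            CREATE TABLE feed (user_id INTEGER, log_id INTEGER, actor_id INTEGER);  -- one row per interested user
        """)

        def add_log(actor_id, log_type, object_id, interested_users):
            """Insert the log entry, then fan it out to every user whose feed should show it."""
            cur = conn.execute(
                "INSERT INTO log (actor_id, log_type, object_id) VALUES (?, ?, ?)",
                (actor_id, log_type, object_id))
            log_id = cur.lastrowid
            conn.executemany(
                "INSERT INTO feed (user_id, log_id, actor_id) VALUES (?, ?, ?)",
                [(uid, log_id, actor_id) for uid in interested_users])

        def get_feed(user_id):
            """Reading a feed becomes a single join instead of a scan of the whole DB."""
            return conn.execute(
                "SELECT l.actor_id, l.log_type, l.object_id FROM feed f "
                "JOIN log l ON l.log_id = f.log_id "
                "WHERE f.user_id = ? ORDER BY f.log_id DESC", (user_id,)).fetchall()

        def unfriend(user_id, ex_friend_id):
            """Step 3 above: drop the ex-friend's entries from this user's feed."""
            conn.execute("DELETE FROM feed WHERE user_id = ? AND actor_id = ?",
                         (user_id, ex_friend_id))

        # userX (id 2) is now friends with userY (id 3); user 1 follows userX, so fan out to user 1.
        add_log(actor_id=2, log_type="now_friend", object_id=3, interested_users=[1])
        print(get_feed(1))   # -> [(2, 'now_friend', 3)]
        unfriend(1, 2)
        print(get_feed(1))   # -> []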

    Read the article

  • Is SSD TRIM support still automatic in 12.10?

    - by dam
    Folks, I had automatic TRIM working on my laptop running Ubuntu Precise. As described in the TRIM guide, I added discard to the mount options in /etc/fstab, and hdparm --read-sector read 0s immediately after an rm && sync. Using the very same hardware, laptop and SSD, TRIM seems no longer to be automatic after upgrading to Quantal. I recognise the test in the guide I mentioned above may not necessarily work – SSD erase blocks and all that. But Quantal is at least different: after deleting the file and syncing, its data are still on disk, unerased, even after waiting several minutes. fstrim will then zero the dead file's blocks. Once. Repeat the same test five minutes later, and fstrim does nothing. I figure this is probably really a kernel issue, but that box is too black for my spelunking torch. I'm prepared to believe that kernel 3.5 knows what I want better than I do, and all is well despite appearances, but it looks for all the world like TRIM isn't quite all there any more. Anybody have the scoop on TRIM in Quantal/kernel 3.5?
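
    One quick sanity check before blaming the kernel is to confirm that the running mount actually carries the discard option added to /etc/fstab. A minimal sketch; the mount point is an illustrative assumption:

        # Read /proc/mounts and report whether 'discard' is among the active mount options.
        def mount_options(mount_point="/"):
            with open("/proc/mounts") as mounts:
                for line in mounts:
                    device, mnt, fstype, options = line.split()[:4]
                    if mnt == mount_point:
                        return options.split(",")
            return []

        opts = mount_options("/")
        print("discard active" if "discard" in opts else "discard NOT active", opts)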

    Read the article

  • XUbuntu: Open file browser via "run command" menu

    - by mbelow
    In older Xubuntu versions, if I entered a path to a directory in the "run command" dialog, a Thunar window was opened showing this directory. I found that to be very handy: if I wanted to open e.g. "/tmp", I just needed to press WindowsKey+R, enter /tmp and press Enter, and there I was. This was also great for URLs (ftp, http, etc.). Unfortunately, since 12.04, this doesn't work anymore. It seems as if the "run command" is now integrated with the program finder. It still works when I type "thunar ftp://...." or "thunar /tmp", but it's a bit tedious now. Is there a way to restore the old behavior? Note: I'm running a German localization of Xubuntu - I hope I translated "run command" and "program finder" correctly...

    Read the article

  • Is browser and bot whitelisting a practical approach?

    - by Sn3akyP3t3
    With blacklisting, it takes plenty of time to monitor events to uncover undesirable behavior and then take corrective action. I would like to avoid that daily drudgery if possible. I'm thinking whitelisting would be the answer, but I'm unsure if that is a wise approach, due to the nature of deny all, allow only a few. My fear is that eventually someone out there will be blocked unintentionally. Even so, whitelisting would also block plenty of undesired traffic to pay-per-use items such as the Google Custom Search API, as well as preserve bandwidth and my sanity. I'm not running Apache, but I'm assuming the idea would be the same. I would essentially be depending on the User-Agent identifier to determine who is allowed to visit. I've tried to take accessibility into account, because some web browsers are geared more toward those with disabilities, although I'm not aware of any specific ones at the moment. The need to not depend on whitelisting alone to keep the site away from harm is fully understood. Other means to protect the site still need to be in place. I intend to have a honeypot, a checkbox CAPTCHA, use of OWASP ESAPI, and blacklisting of previously known bad IP addresses.
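
    A minimal sketch of the User-Agent whitelisting idea, independent of any particular web server; the allowed-substring list and the WSGI-style wrapper are illustrative assumptions only, not a recommendation of which agents to allow:

        # Hypothetical whitelist: substrings an allowed User-Agent must contain.
        # A real list needs care (accessibility browsers, legitimate bots, API clients, ...).
        ALLOWED_AGENT_SUBSTRINGS = ("Firefox/", "Chrome/", "Safari/", "Googlebot")

        def agent_allowed(user_agent):
            """Deny by default: return True only if the User-Agent matches the whitelist."""
            if not user_agent:
                return False
            return any(token in user_agent for token in ALLOWED_AGENT_SUBSTRINGS)

        def whitelist_middleware(app):
            """WSGI-style wrapper: reject non-whitelisted agents before the app runs."""
            def wrapped(environ, start_response):
                if not agent_allowed(environ.get("HTTP_USER_AGENT", "")):
                    start_response("403 Forbidden", [("Content-Type", "text/plain")])
                    return [b"Forbidden"]
                return app(environ, start_response)
            return wrapped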

    Read the article

  • Week in Geek: SkyDrive Bug Blocks Opera Browser Users from the Service

    - by Asian Angel
    Our latest edition of WIG is filled with news link coverage on topics such as how the FBI and CIA can read your e-mail, Blizzard admitting to wrongfully banning a Diablo 3 Linux user and refunding his money, an increase in e-mailed malware disguised as group coupon offers, and more. Chainlink clipart courtesy of For Web Designer.

    How To Delete, Move, or Rename Locked Files in Windows
    HTG Explains: Why Screen Savers Are No Longer Necessary
    6 Ways Windows 8 Is More Secure Than Windows 7

    Read the article

  • Does Your Browser Behave?

    Last June, we launched the Sputnik JavaScript conformance test suite, a comprehensive set of more than 5000 tests. Today we're releasing a test runner for Sputnik that allows...

    Read the article

  • Week in Geek: Google Chrome Becomes #1 Browser in the World for a Day

    - by Asian Angel
    Our last edition of WIG for March is filled with news link goodness such as 22% of users keeping the Windows 8 Explorer Ribbon expanded, Facebook being upset with prospective employers asking for people’s account passwords, Firefox 14 nightly adding a new HTML5-based PDF viewer, and more.

    How To Properly Scan a Photograph (And Get An Even Better Image)
    The HTG Guide to Hiding Your Data in a TrueCrypt Hidden Volume
    Make Your Own Windows 8 Start Button with Zero Memory Usage

    Read the article

  • Hosting files with support for file tagging / keywords

    - by Zev Chonoles
    I have a large (approx. 25GB) collection of files I would like to host online for people to view or download. I have a spare computer I can use as a dedicated server for these files. I'm looking for a method of, or piece of software for, hosting my files where I can assign tags or keywords to the files, and people viewing my files online can search the collection via the tags. By way of approximate solutions I've found so far, I see that there is software such as Collectorz.com or Readerware for creating databases of one's books / music / movies, and these databases can be searched by tags or keywords, and the databases can be made available and searchable online; this would suit my purposes except that my files are not necessarily books, music, or movies, and I want the files themselves accessible online, not a database describing my files. A commercially-available solution like the ones above would be acceptable, but I'd prefer to have the whole setup under my control (i.e. I'd like to either implement it by hand, or use commercial software that doesn't rely on using the company's servers, paying them a continued fee, etc.). The current extent of my internet experience is designing a few Google Sites, so I know there's a fair chance I won't understand the answers I receive, but I'm always happy to have a summer project :)
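
    For the implement-it-by-hand route, the data model is small: a table of files plus a table of tags, and a search that joins them. A minimal sketch, assuming SQLite and illustrative names; any small web front end could then expose the search to visitors:

        import sqlite3

        conn = sqlite3.connect("file_index.db")
        conn.executescript("""
            CREATE TABLE IF NOT EXISTS files (file_id INTEGER PRIMARY KEY, path TEXT UNIQUE);
            CREATE TABLE IF NOT EXISTS tags  (file_id INTEGER, tag TEXT);
        """)

        def add_file(path, tags):
            """Register a hosted file and attach lower-cased tags to it."""
            conn.execute("INSERT OR IGNORE INTO files (path) VALUES (?)", (path,))
            file_id = conn.execute("SELECT file_id FROM files WHERE path = ?", (path,)).fetchone()[0]
            conn.executemany("INSERT INTO tags (file_id, tag) VALUES (?, ?)",
                             [(file_id, t.lower()) for t in tags])
            conn.commit()

        def search(tag):
            """Return all file paths carrying the given tag."""
            rows = conn.execute(
                "SELECT f.path FROM files f JOIN tags t ON t.file_id = f.file_id WHERE t.tag = ?",
                (tag.lower(),)).fetchall()
            return [r[0] for r in rows]

        add_file("/srv/share/holiday-2011.zip", ["photos", "2011"])
        print(search("photos"))   # -> ['/srv/share/holiday-2011.zip']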

    Read the article

  • Does Windows 8 still support DirectX 9?

    - by SullY
    Does Windows 8 support DirectX 9? I was looking through some samples written in C++ and DirectX made for Windows 8, and it wasn't DirectX as I know it (look here: http://directxtutorial.com/Lesson.aspx?lessonid=111-4-2). E.g. initializing DirectX with COM:

        ComPtr<ID3D11Device1> dev;
        ComPtr<ID3D11DeviceContext1> devcon;

    It's just weird, because I know it the old way:

        ID3D11Device *dev;
        ID3D11DeviceContext *devcon;

    (I hope you understand what I want to tell.) I hope it hasn't changed completely with their newly released OS.

    Read the article
