Search Results

Search found 7249 results on 290 pages for 'https everywhere'.

  • Does IE have stricter JavaScript parsing than Chrome?

    - by Clay Shannon
    This is not meant to start a religio-technical browser war - I still prefer Chrome, at least for now - but: because of a perhaps Chrome-related problem with my web page (see https://code.google.com/p/chromium/issues/detail?can=2&start=0&num=100&q=&colspec=ID%20Pri%20M%20Iteration%20ReleaseBlock%20Cr%20Status%20Owner%20Summary%20OS%20Modified&groupby=&sort=&id=161473), I temporarily switched to IE (10) to see if it would also view the time value as invalid. However, I didn't even get to that point - IE stopped me in my tracks before I could get there; and I found that IE was right - it is more particular/precise in validating my code. For example, I got this from IE:

        SCRIPT5007: The value of the property '$' is null or undefined, not a Function object

    ...which was referring to this:

        <script src="/CommonLogin/Scripts/jquery-1.9.1.min.js" type="text/javascript"></script>
        <script type="text/javascript">
            // body sometimes becomes white???? with jquery 1.6.1
            $("body").css("background-color", "#405DA7");

    This line is highlighted as the culprit:

        $("body").css("background-color", "#405DA7");

    jQuery is referenced right above it - so why did it consider "$" to be undefined, especially when Chrome had no problem with it... ah! I looked at that location (/CommonLogin/Scripts/) and saw that, sure enough, the version of jQuery there was actually jquery-1.6.2.min.js. I added the updated jQuery file (1.9.1) and it got past this. So now the question is: why does Chrome ignore this? Does it download the referenced version from its own CDN if it can't find it in the place you specify? IE did flag other errors after that, too; so I'm thinking perhaps IE is better at catching lurking problems than, at least, Chrome is. Haven't tested Firefox in this regard yet.
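
    A quick way to make this failure mode loud in every browser is to guard the first jQuery call. This is a minimal sketch (not from the original post), assuming the same script path as above:

        <script src="/CommonLogin/Scripts/jquery-1.9.1.min.js" type="text/javascript"></script>
        <script type="text/javascript">
            // If the src above points at a file that doesn't exist, the browser
            // loads nothing and $ stays undefined - so check before using it.
            if (typeof window.jQuery === "undefined") {
                console.error("jQuery failed to load - check the script src path and version.");
            } else {
                $("body").css("background-color", "#405DA7");
            }
        </script>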

  • Why is apt-cache so slow?

    - by Damn Terminal
    After upgrading to Trusty (14.04) from Saucy (13.10), all apt operations are very slow - even those that do not include downloading anything or connecting to any servers. For example, displaying the apt policy:

        # time apt-cache policy
        [...]
        real    0m8.951s
        user    0m5.069s
        sys     0m3.861s

    takes almost ten seconds! Most of it is a weird lag right after issuing the command, and it's the same even if I issue the same command again. On another system it doesn't take a tenth of a second:

        real    0m0.096s
        user    0m0.070s
        sys     0m0.023s

    The other system is a little beefier, but there was no noticeable difference before the upgrade. It's the same with apt-get - anything apt-related. How do I find out the source of this lag and fix it? Additional info:

        # cat /etc/nsswitch.conf
        # /etc/nsswitch.conf
        #
        # Example configuration of GNU Name Service Switch functionality.
        # If you have the `glibc-doc-reference' and `info' packages installed, try:
        # `info libc "Name Service Switch"' for information about this file.

        passwd:         compat
        group:          compat
        shadow:         compat

        hosts:          files dns
        networks:       files

        protocols:      db files
        services:       db files
        ethers:         db files
        rpc:            db files

        netgroup:       nis

    BTW, is my understanding of how apt-cache works correct? It doesn't make any network connections when I run apt-cache policy, right? In case I'm wrong and it matters, here are my sources: https://gist.github.com/anonymous/02920270ff68e23fc3ec

  • Ubuntu-Java Installation-Topcoder-“Java not found”

    - by hakuna121
    I realize this might be the dumbest question here, but being the total beginner that I am, I really couldn't figure it out after trying all kinds of instructions I could find on the web. Specs: Ubuntu 13.04. What I intended to do: check out the Algorithm Competition section by clicking the second-from-left tab, located on the top-left of the page: http://community.topcoder.com/tc What I got: a pop-up saying

        Java not found! Java could not be automatically detected on your machine. This page will attempt to automatically install Java and Java Web Start. If the download and installation does not occur automatically, click the link below to go to the Sun website where you can download the latest version of Java.

    What I did: I followed the instructions on this Ubuntu documentation page: https://help.ubuntu.com/community/Java and installed OpenJDK (Java Runtime Environment / browser plugin / SDK) through the Ubuntu Software Center. Then I rebooted the system and tried the page again, but I still got the "Java not found" pop-up described above. Question: what's missing to get this working? Thank you!

  • OAuth2 vs Public API

    - by Adam Tannon
    My understanding of OAuth (2.0) is that it's a software stack and protocol that allows 2+ web apps to share information about a single end user. User A is a member of Site B and Site C; Site B wants to fetch some data from Site C about User A, and this is where OAuth steps in. So first off, if this assessment is incorrect, please begin by clarifying this for me and correcting me! Assuming I'm on the right track, then I guess I'm not seeing the need for OAuth to begin with (!). I'm sure I'm just not seeing the forest for the trees here, but the way I see it, couldn't Site C just expose a public API that Site B could use to fetch the same data (sans OAuth)? If Site C required user credentials to access the data, couldn't this public API just use HTTPS for secure transport and require username/password as a part of each API call? Again, I'm sure I'm missing something, but I'm just not understanding why I would need OAuth when a secure, public API written and exposed by Site C seems more than capable of delivering what Site B needs regarding User A. In general, I'm looking for a set of guidelines to go by when deciding between using OAuth for my web apps or just writing my own web service (exposing a public API). Thanks in advance!
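
    To make the trade-off concrete, here is a hedged sketch (JavaScript, hypothetical endpoints, assumed to run inside an async function) of the two approaches. The practical difference is that the password scheme forces User A to hand their Site C password to Site B, while OAuth gives Site B only a scoped, revocable token:

        // Approach 1: Site B calls Site C's API with User A's raw credentials.
        const basic = btoa("userA:password"); // User A's Site C password, held by Site B
        const viaPassword = await fetch("https://site-c.example.com/api/userdata", {
            headers: { "Authorization": "Basic " + basic }
        });

        // Approach 2 (OAuth): Site B holds only an access token that User A
        // granted; it can be scoped and revoked without changing the password.
        const viaOAuth = await fetch("https://site-c.example.com/api/userdata", {
            headers: { "Authorization": "Bearer " + accessToken }
        });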

  • Acer S3 SSD and HD: deleted partitions?

    - by user207784
    I'm new to Linux and Ubuntu, and I think I have a problem. I have an Acer Aspire S3 with a 20GB SSD and a 320GB HD. I installed Ubuntu 12.04 64-bit on it today, and when it asked about partitions, without knowing better, I erased the partitions. There is some information at https://help.ubuntu.com/community/AspireS3 about using the SSD and HD efficiently, but since I deleted the partitions, I don't know what to do. How can I recreate the partitions mentioned in the above link so that I can take advantage of having both the SSD and HD? I installed GParted, but I don't know what I should do now, and I don't want to screw things up further. I greatly appreciate any help that can be offered. EDIT: I was playing with GParted, and I just realized that I can see /dev/sda and /dev/sdb, so perhaps I didn't do anything horrible to my partitions. I am also sorry for asking such (dumb?) questions. At this point, is there a way to tell whether I have actually screwed up my partitions? Thanks, Joe

  • How to host an AP or a hotspot?

    - by user1048138
    I'm running Ubuntu 12.04 as a virtual machine on my Mac. Since I am unable to give the virtual machine full access to my WiFi card, I bought another USB WiFi card to use. This is my WiFi card. (If you are unfamiliar with virtual machines: as far as I know, since the Ubuntu VM now has its own card, that shouldn't matter.) I have followed these guides with no luck:

        https://help.ubuntu.com/community/WifiDocs/WirelessAccessPoint
        http://www.danbishop.org/2011/12/11/using-hostapd-to-add-wireless-access-point-capabilities-to-an-ubuntu-server/

    The problem is that the WiFi connection appears on all of the machines I have in my house - 2 iPhones, a Dell machine running Ubuntu, and two MacBooks - but the connection times out on all of them. Questions:

        1. Could this be a driver issue, given that the same WiFi card can connect to other WiFi points and use their internet?
        2. Could this be DHCP-related? I would think not - it should at least get a 169.X.X.X address, no?

    Any solutions for me?

  • Ubuntu 11.10 loads from live USB fine, but boots to a black screen from the hard drive. Why?

    - by Estel
    A few days ago I had a hard drive failure; the drive was running Windows XP (32-bit) just fine. The second hard drive in my computer held a few unimportant files, so I formatted it in the Ubuntu setup and installed 11.10 without a hitch. I had been using it for about a week, but decided to install Windows 7 (64-bit) in order to use networking with my home server (running Windows Server 2000). My system is 64-bit based, and I had no problems installing other than a basic RAM error that required me to remove my RAM down to a single stick. I played with the settings in Windows 7 for around an hour before I shut down. After reinstalling the RAM, Windows 7 would not boot. I then assumed that something about my system was rejecting Win7, and I reinstalled Ubuntu. However, now Ubuntu (11.10) boots into a black screen. I've already attempted activating the grub menu with the shift key and following the steps listed here: https://wiki.ubuntu.com/X/Troubleshooting/BlankScreen but nothing seems to work. I've reinstalled twice now, with the same result each time.

    Now, the very odd part about this whole scenario is that the USB I installed from has no problems booting as a live USB. This puzzles me greatly, because the hard drive boots straight to a black screen while the live USB loads normally. At this point, my only theory is that the boot sector of the hard disk was somehow corrupted by Win7, and that Ubuntu was unable to completely write through. I used Darik's Boot and Nuke to wipe the drive, but was met with an error; this also puzzles me because the hard disk has no problems reading or writing. Any suggestions/comments are appreciated. If you have a theory, I will be more than happy to oblige. Additional information:

        Intel Core2 Duo E6400 2.13GHz
        nVidia GeForce 7-series (7900 GS)
        4 GB DDR2 333MHz (2x 2GB)
        Dell XPS 410
        BIOS Revision 2.5.3

  • Messaging with KnockoutJs

    - by Aligned
    MVVM Light has Messaging that helps keep View Models decoupled and isolated, preserving the separation of concerns while still allowing them to communicate with each other. This is a very helpful feature: one View Model can send off a message, and if anyone is listening for it they will react; otherwise nothing happens. I now want to do the same with KnockoutJs View Models. Here are some links on how to do this:

        http://stackoverflow.com/questions/9892124/whats-the-best-way-of-linking-synchronising-view-models-in-knockout
        http://www.knockmeout.net/2012/05/using-ko-native-pubsub.html ~ a great article describing the ko.subscribable type
        http://jsfiddle.net/rniemeyer/z7KgM/ ~ shows how to do the subscription
        https://github.com/rniemeyer/knockout-postbox ~ will be used to help with the PubSub (described in the blog post above), through the NuGet package
        http://jsfiddle.net/rniemeyer/mg3hj/ ~ an example of knockout-postbox

    Implementation: use syncWith for two-way synchronization.

    MainVM:

        self.selectedElement = ko.observable().syncWith("selectedElement");

    ElementListComponentVM example:

        self.selectedElement = ko.observable().syncWith("selectedElement");
        self.selectedElement.subscribe(function () {
            // do something with the selection change
        });

    ElementVMTwo:

        self.selectedElement = ko.observable().syncWith("selectedElement");

        // subscribe example
        ko.postbox.subscribe("changeMessage", function (newValue) {
        });

        // or use subscribeTo
        this.visible = ko.observable().subscribeTo("section", function (newValue) {
            // do something here
        });

    Notes:

        · Use ko.toJS to avoid both sides holding the same reference (see the blog post).
        · unsubscribeFrom should be called when the dialog is hidden or closed.
        · Use publishOn to automatically send out messages when an observable changes:

            ko.observable().publishOn("section");
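
    For completeness, here is a minimal self-contained sketch of the pattern above (assuming knockout.js and knockout-postbox are loaded; the view model names are illustrative):

        // Two independent view models kept in sync over the "selectedElement" topic.
        function ListViewModel() {
            this.selectedElement = ko.observable().syncWith("selectedElement");
        }

        function DetailViewModel() {
            this.selectedElement = ko.observable().syncWith("selectedElement");
            this.selectedElement.subscribe(function (newValue) {
                console.log("Detail VM saw new selection:", newValue);
            });
        }

        var list = new ListViewModel();
        var detail = new DetailViewModel();
        list.selectedElement("element-42"); // the detail VM logs the change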

  • Good links somehow being converted to ones with a PHP redirect (not a virus)

    - by Rebecca
    This has happened to links we put on web pages and in emails. We might put www.oursite.org/work/, but when I view source it shows up as

        webmail.ourhosting.ca/hwebmail/services/go.php?url=https%3A%2F%2Fwww.oursite.org%2F%2work%2F

    This ends up at the webmail login page for our web host. But only some of the people who click the link get the login page; others go directly to the original page we intended. We don't want it to go to the webmail login page; nobody needs to log in to our web site. This occurs for links to pages on our site, but also for links to other sites that we put in emails or in posts. It seems to be browser-independent as well as email-client-independent, as we have variously used Firefox and Chrome as well as MS Outlook and Thunderbird. I've tried to resolve the issue with our web host, but they keep telling me they don't support our browser or our email client (i.e., they don't understand the issue). At the moment, our only option is to try another web host just to get rid of their login. Any ideas about what's going on?

  • Ubuntu One - Files API - Cloud - More detailed info somewhere?

    - by Brian McCavour
    I am just starting on a mobile app for Ubuntu One, and I'm reviewing the info at https://one.ubuntu.com/developer/files/store_files/cloud I find the information a bit lacking, though. It's a nice reference, but as someone not familiar with it, I had to do a Google search to find out what a "volume" is exactly (it's kind of obvious, but it never hurts to know the specifics). There are also things like:

        GET /api/file_storage/v1/volumes
        Return a JSON list of Volume Representations, one for each volume. A volume is a synced folder, or the Ubuntu One folder, owned by the user. Note that all volume paths begin with ~.

    ...but there's no such thing as a JSON "list". Does it mean array? And other things... So I was wondering if there exists another page with more detailed information - maybe some sample requests/responses or something? I could just write a little proof-of-concept app to answer some of these questions, but I prefer not to unless I have to. Thanks
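
    As a sketch of what consuming that endpoint could look like (JavaScript, inside an async function; the real API requires OAuth-signed requests, which are omitted here, and the field names are illustrative):

        const resp = await fetch("https://one.ubuntu.com/api/file_storage/v1/volumes", {
            headers: { "Authorization": oauthHeader } // hypothetical pre-built OAuth header
        });
        const volumes = await resp.json(); // the "JSON list" is a plain JSON array
        volumes.forEach(function (v) {
            console.log(v.path, v.type); // illustrative field names
        });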

  • Use the Latest Guided Learning Paths for BI & EPM Partners

    - by Mike.Hallett(at)Oracle-BI&EPM
    Keep up to date with the current version of the Guided Learning Paths for BI Partners @ https://competencycenter.oracle.com/opncc/glp_list.cc. For example:

    Business Intelligence Implementation Consultant:
        Business Intelligence Applications 7.9.6 for CRM Implementation Specialist
        Business Intelligence Applications 7.9.6 for ERP Implementation Specialist
        Oracle Business Intelligence Foundation 11g Implementation Specialist
        Oracle Essbase 11 Implementation Specialist

    PreSales Consultant:
        Business Intelligence Applications 7.9.6 PreSales Specialist
        Oracle Business Intelligence Foundation Suite 11g PreSales Specialist
        Oracle Essbase 11 PreSales Specialist

    Sales Person:
        Business Intelligence Applications 7.9.6 Sales Specialist
        Oracle Business Intelligence Foundation Suite 11g Sales Specialist
        Oracle Essbase 11 Sales Specialist

  • Why won't my graphics work in Ubuntu 12.04 LTS?

    - by user170974
    I'm very new to Ubuntu and to Linux in general, and took the leap and formatted my PC to Ubuntu 12.04 LTS very recently :) I seem to be having some trouble getting my graphics card to run properly. I looked over what information I could find, but I still cannot get it up and running, and figured this was a good place to ask for help. The information I can find on my graphics is as follows. The terminal command lspci outputs:

        01:05.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] RS880M [Mobility Radeon HD 4225/4250]
        02:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Madison [Mobility Radeon HD 5650/5750 / 6530M/6550M]

    I tried using a mixture of the following links:

        How do I fix my installation of ATI Catalyst Video Driver in 12.04 LTS?
        What is the correct way to install ATI Catalyst Video Drivers (fglrx)?
        Ubuntu Precise Installation Guide

    But it does not seem to work, since running fglrxinfo in the terminal gives:

        display: :0.0  screen: 0
        OpenGL vendor string: VMware, Inc.
        OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 0x301)
        OpenGL version string: 1.4 (2.1 Mesa 9.0.3)

    What am I doing wrong here? All help appreciated ;) Edit: I have tried the legacy driver from www2.ati.com/drivers/linux/amd-driver-installer-catalyst-13-4-linux-x86.x86_64.zip and I also tried the guide at https://launchpad.net/~makson96/+archive/fglrx which caused the system to crash (black screen, no boot). Neither seemed to work. I did however reinstall Ubuntu 12.04 LTS and re-try both, with no success. Reinstalling Ubuntu did fix the broken-dependencies problems, etc.

  • Access Control Service: Programmatically Accessing Identity Provider Information and Redirect URLs

    - by Your DisplayName here!
    In my last post I showed you that different redirect URLs trigger different response behaviors in ACS. Where did I actually get these URLs from? The answer is simple - I asked ACS ;) ACS publishes a JSON-encoded feed that contains information about all registered identity providers: their display names, logos and URLs. With that information you can easily write a discovery client which, at the very heart, does this:

        public void GetAsync(string protocol)
        {
            var url = string.Format(
                "https://{0}.{1}/v2/metadata/IdentityProviders.js?protocol={2}&realm={3}&version=1.0",
                AcsNamespace,
                "accesscontrol.windows.net",
                protocol,
                Realm);

            _client.DownloadStringAsync(new Uri(url));
        }

    The protocol can be one of these two values: wsfederation or javascriptnotify. Based on that value, the returned JSON will contain the URLs for either the redirect or notify method. Now with the help of some JSON serializer you can turn that information into CLR objects and display them in some sort of selection dialog. The next post will have a demo and source code.
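
    Since one of the supported protocol values is javascriptnotify, the same discovery call can also be made browser-side. A sketch in JavaScript (not verified against a live ACS namespace; the property names are assumed to match the ACS feed):

        var url = "https://" + acsNamespace + ".accesscontrol.windows.net" +
                  "/v2/metadata/IdentityProviders.js?protocol=javascriptnotify" +
                  "&realm=" + encodeURIComponent(realm) + "&version=1.0";

        fetch(url)
            .then(function (r) { return r.json(); })
            .then(function (providers) {
                providers.forEach(function (p) {
                    console.log(p.Name, p.LoginUrl); // display name and redirect/notify URL
                });
            });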

  • JSR Updates and EC Meeting Tuesday @ 15:00 PST

    - by Heather VanCura
    JSR 310, Date and Time API, has moved to JCP 2.9 (the first JCP 2.9 JSR!). JSR 236, Concurrency Utilities for Java EE, has published an Early Draft Review; this review ends 15 December 2012. Tomorrow, Tuesday 20 November, is the last public EC meeting of 2012, and the first EC meeting with the merged EC. The second hour of this meeting will be open to the public at 3:00 PM PST. The agenda includes the JSR 355 (EC merge) implementation report, the JSR 358 (JCP.next.3) status report, the JCP 2.8 status update and the community audit program. Details are below. We hope you will join us, but if you cannot attend, not to worry - the recording and materials will also be public on the JCP.org multimedia page.

    Meeting details:

        Date & Time: Tuesday November 20, 2012, 3:00 - 4:00 pm PST
        Location: Teleconference
        Dial-in: +1 (866) 682-4770 (US), conference code 627-9803, security code 52732 ("JCPEC" on your phone handset); for global access numbers see http://www.intercall.com/oracle/access_numbers.htm, or +1 (408) 774-4073
        WebEx: Browse for the meeting from https://jcp.webex.com (no registration required - enter your name and email address); password: JCPEC

    Agenda:

        JSR 355 (the EC merge) implementation report
        JSR 358 (JCP.next.3) status report
        2.8 status update and community audit program
        Discussion/Q&A

    Note: the call will be recorded and the recording published on jcp.org, so those who are unable to join in real-time will still be able to participate.

  • Migrating My GWB Blog Over To WordPress

    - by deadlydog
    Geeks With Blogs has been good to me, but there are too many tempting things about WordPress for me to stay with GWB - particularly their statistics features, and the fact that I can apply specific tags to my posts to make them easier to find in Google (and I never was able to get categories working on GWB). The one thing I don't like about WordPress is that I can't seem to find a theme that stretches the content area to take up the rest of the space on wide resolutions... hopefully I'll be able to overcome that obstacle soon, though.

    For those who are curious as to how I actually moved all of my posts across: I ended up just using Live Writer to open my GWB posts and then changed it to publish to my WordPress blog. This was fairly painless, but with my 27 posts it took about 2 hours of manual effort. Most of the time WordPress was automatically able to copy the images over, but sometimes I had to save the pics from my GWB posts and re-add them in Live Writer to my WordPress post. I found another developer's automated solution (in alpha mode), but opted to do it manually since I wanted to manually specify categories and tags on each post anyway. The one thing I still have left to do is move the worthwhile comments across from the GWB posts to the new WordPress posts.

    The largest pain point was that on GWB I was using the Code Snippet plugin for Live Writer for my source code snippets, and when they got transferred over to WordPress they looked horrible. So I ended up finding a new Live Writer plugin called SyntaxHighlighter that looks even nicer on my posts. If anybody knows of a nice WordPress theme that stretches the content area to fit the width of the screen, please let me know. All of the themes I've found seem to have a max width set on them, so I end up with much wasted space on the left and right sides. Since I post lots of code snippets, the more horizontal space the better! So if you want to continue to follow me, my blog's new home is now at https://deadlydog.wordpress.com

  • Is there any way to optimize my blob search program?

    - by Vicky
    I wrote this code to search blob items (text files) based on their content. For example, if I search for "Good", then the names of the files that contain the word "Good" or "good" should appear in the search results. My code works, but I want to optimize it.

        class BlobSearch
        {
            public static int num = 1;

            static void Main(string[] args)
            {
                string accountName = "accountName";
                string accessKey = "accesskey";
                string azureConString = "DefaultEndpointsProtocol=https;AccountName=" + accountName + ";AccountKey=" + accessKey;
                string blob = "MyBlobContainer";
                string searchText = string.Empty;

                Console.WriteLine("Type and enter to search : ");
                searchText = Console.ReadLine();

                CloudStorageAccount account = CloudStorageAccount.Parse(azureConString);
                CloudBlobClient blobClient = account.CreateCloudBlobClient();
                CloudBlobContainer blobContainer = blobClient.GetContainerReference(blob);
                blobContainer.FetchAttributes();
                var blobItemList = blobContainer.ListBlobs();
                GetBlobList(searchText, blobContainer, blobItemList);
                Console.ReadLine();
            }

            // Walks the listing and searches every .txt blob, one at a time.
            private static async void GetBlobList(string searchText, CloudBlobContainer blobContainer, IEnumerable<IListBlobItem> blobItemList)
            {
                foreach (var item in blobItemList)
                {
                    CloudBlockBlob blockBlob = blobContainer.GetBlockBlobReference(item.Uri.ToString());
                    if (blockBlob.Name.Contains(".txt"))
                    {
                        await Search(searchText, blockBlob);
                    }
                }
            }

            // Downloads the blob's text and does a case-insensitive substring match.
            private async static Task Search(string searchText, CloudBlockBlob blockBlob)
            {
                string text = await blockBlob.DownloadTextAsync();
                if (text.ToLower().IndexOf(searchText.ToLower()) != -1)
                {
                    Console.WriteLine("Result : " + num + " => " + blockBlob.Name.Substring(blockBlob.Name.LastIndexOf('/') + 1));
                    num++;
                }
            }
        }

    I think blobContainer.ListBlobs() is blocking code, because the search will not work until all the blob items are loaded. Is there any way to optimize it, there or anywhere else in my code? Thanks

  • Trouble Installing Codecs libx264 AND libmp3lame

    - by user10762
    I am trying to use OpenShot for movie making. My ideal situation is to encode for the web in HD, at 30 frames per second, for YouTube. This works out great, as OpenShot has a setting for exactly this. The problem is that when I go to use it I get:

        "The following formats/codecs are missing from your system: libx264, libmp3lame. You will not be able to use the selected export profile. You will need to install the missing formats/codecs or choose a different export profile."

    I have tried using Synaptic Package Manager, and I think I am installing the right plugins/codecs, yet with no success. I used the instructions found here to remedy it: https://answers.launchpad.net/openshot/+faq/1040 I know this impacts OTHER video editing software too (in other words, it is not specific to OpenShot); they give the same type of error message when I try to use their presets as well. So in a word: "Help!"... Any info is much appreciated!!! PS - If you know of a BETTER way to do this (other software), that is appreciated too! :-D

  • A Web Service to collect data from local servers every hour

    - by anilerduran
    I'm trying to find a way to collect data from different servers around the world. Here are the details: there is only a single PowerShell script on the servers; it encrypts data (a simple CSV file) and sends it with the preferred method (an HTTP/HTTPS POST would work). There is no further control over those servers - I can't install any service, process, etc. I can only configure the script to execute every hour. This script will also have an encrypted username/password/license key for every server; it will compress the data and send it to me along with that information. So I need a service on the cloud (I'm not sure if a web service is the right solution) that will:

        1. Receive the data that is sent from the servers.
        2. Authenticate the request, recognizing the sender by license key/username/password, and most importantly,
        3. Redirect/send this filecab to my SQL Server on the cloud (Azure), separating the data according to the customer information in the license key, so that every customer's data is stored in dedicated DBs/tables on my SQL Server.

    All the processes above should be completed automatically - no manual steps. Question: is a web service (SOAP or RESTful) the right solution for this?
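
    As a sketch of the receiving side (Node/Express in JavaScript; every name here is hypothetical, and lookupCustomer/storeForCustomer stand in for your own license-key lookup and Azure SQL insert):

        const express = require("express");
        const app = express();

        // Accept the compressed/encrypted payload as a raw body.
        app.use(express.raw({ type: "application/octet-stream", limit: "10mb" }));

        app.post("/ingest", (req, res) => {
            const licenseKey = req.header("X-License-Key"); // hypothetical header set by the script
            const customer = lookupCustomer(licenseKey);    // hypothetical: resolve customer from key
            if (!customer) {
                return res.status(401).send("unknown license key");
            }
            storeForCustomer(customer, req.body);           // hypothetical: write to that customer's tables
            res.status(202).send("accepted");
        });

        // Run behind an HTTPS-terminating proxy/load balancer.
        app.listen(3000);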

  • How to disable discrete GPU using NVIDIA drivers?

    - by penzoiders
    I have a Dell Studio XPS 13 (aka 1340). As of 12.04 most things run smoothly out of the box, but I have some power drain and warmness issues (if not to be called terrible heat issues). The system came with an NVIDIA GeForce 9500M (which has Hybrid SLI), and it shows up in lspci as these 2 cards:

        02:00.0 VGA compatible controller: NVIDIA Corporation G98 [GeForce 9200M GS] (rev a1)
        03:00.0 VGA compatible controller: NVIDIA Corporation C79 [GeForce 9400M G] (rev b1)

    I had to install nvidia-current over the nouveau driver because nouveau freezes the system after suspension. By installing nvidia-current and running nvidia-xconfig, the resume-after-suspension process is fixed. By the way, both with nvidia-current and nouveau the system drains a lot of battery and heats up a lot. I suppose this is because the discrete GPU is always on. I don't really need 3D graphics on this system, other than the minimum to run Unity and Compiz for window management. So my question is: how do I disable, using nvidia-current, the discrete GPU 9200M and use only the integrated 9400M? Notes:

        In the BIOS I have no option to disable the discrete GPU.
        This, I think, is not applicable because of the suspension-freeze issue (with nouveau): https://help.ubuntu.com/community/HybridGraphics
        I've found this, but I don't know which --sli option I should choose to fit my needs: http://manpages.ubuntu.com/manpages/hardy/man1/nvidia-xconfig.1.html

    My system does not have Optimus or CUDA, but can anyone tell me if Bumblebee would work for me?

  • Dependency Inversion Principle

    - by Chris Paine
    I have also been studying S.O.L.I.D. and watched this video: https://www.youtube.com/watch?v=huEEkx5P5Hs At 01:45:30 into the video he talks about the Dependency Inversion Principle, and I am scratching my head??? I had to simplify it (if possible) to get it through this thick skull of mine, and here is what I came up with. Code marked My_modified_code is my version; code marked Original is the DIP video version. Can I accomplish the same with the latter code? Thanks in advance.

    Original:

        namespace simple.main
        {
            class main
            {
                static void Main()
                {
                    FirstClass FirstClass = new FirstClass(new OtherClass());
                    FirstClass.Method();
                    Console.ReadKey();
                }
            }

            public class FirstClass
            {
                private tempClass _LastClass;

                public FirstClass(tempClass tempClass) // ctor
                {
                    _LastClass = tempClass;
                }

                public void Method()
                {
                    _LastClass.Method();
                }
            }

            public abstract class tempClass { public abstract void Method(); }

            public class LASTCLASS : tempClass
            {
                public override void Method() { Console.WriteLine("\nHello World!"); }
            }

            public class OtherClass : tempClass
            {
                public override void Method() { Console.WriteLine("\nOther World!"); }
            }
        }

    My_modified_code:

        namespace simple.main
        {
            class main
            {
                static void Main()
                {
                    tempClass temp = new OtherClass();
                    temp.Method();
                }
            }

            public abstract class tempClass { public abstract void Method(); }

            public class LASTCLASS : tempClass
            {
                public override void Method() { Console.WriteLine("\nHello World!"); }
            }

            public class OtherClass : tempClass
            {
                public override void Method() { Console.WriteLine("\nOther World!"); }
            }
        }

  • How to create a bootable system with a squashfs root

    - by cldfzn
    My goal is to be able to take a customized root file system loaded with the software I want. So far I've created a squashed filesystem, using debootstrap and chroot to install the software I want on the system. The problem I am now running into: whenever I boot into the system, the user accounts that were set up in the chroot do not work. On first boot everything works; on the second boot I can't log in. That is baffling to me. Anyone know a reason, or a place to start looking?

    Update - to get a working system with a squashfs filesystem:

        sudo apt-get install live-boot live-boot-initramfs-tools extlinux
        sudo update-initramfs -u

    Create a squashfs file from a debootstrapped or running Ubuntu filesystem with whatever packages you want available. https://help.ubuntu.com/community/LiveCDCustomizationFromScratch provides good instructions for creating a debootstrapped system to build on. Format the target drive with ext2/3/4 and enable the bootable flag. Create the folder layout on the target drive and install extlinux:

        mkdir -p ${TARGET}/boot/extlinux ${TARGET}/live
        extlinux -i ${TARGET}/boot/extlinux
        dd if=/usr/lib/syslinux/mbr.bin of=/dev/sdX   # X is the drive letter
        cp /boot/vmlinuz-$(uname -r) ${TARGET}/boot/vmlinuz
        cp /boot/initrd.img-$(uname -r) ${TARGET}/boot/initrd
        cp filesystem.squashfs ${TARGET}/live

    Create ${TARGET}/boot/extlinux/extlinux.conf with the following contents:

        DEFAULT Live
        LABEL Live
        KERNEL /boot/vmlinuz
        APPEND initrd=/boot/initrd boot=live toram=filesystem.squashfs
        TIMEOUT 10
        PROMPT 0

    Now you should be able to boot from the target drive into your squashed system.

  • Visits, Page Views, Bounce Rate, New Visitors, Visit Duration (Google Analytics): which one is the top priority for SEO?

    - by HOY
    This is the case: my site is getting a lot of traffic from an image (a company logo image) because this image is ranked 1st in Google search results for a company's title. (I have no idea how that happened.) This image is a must for my website, but it is not relevant to the site content, so irrelevant people search for the image and find out about my site. As a result I get interesting statistics: http://postimage.org/image/3oyvrjoz9/

        Pros: Total Visits & Avg. New Visits
        Cons: Avg. Page/Visit, Avg. Visit Duration, Bounce Rate

    In summary, I am confused about whether this image is helpful to my website, because I don't know the balance between those 5 statistics. P.S: My website is 2 months old, and we are working on SEO at the moment. Another P.S: I kindly ask you not to provide assumptions, because I also have assumptions; I need real knowledge. Edit: Search keyword: arcelik logo. Search site: google.com.tr. Search URL: https://www.google.com.tr/search?hl=en&q=arcelik+logo&bav=on.2,or.r_gc.r_pw.r_qf.&bvm=bv.41524429,d.Yms&biw=1366&bih=667&um=1&ie=UTF-8&tbm=isch&source=og&sa=N&tab=wi&ei=oZIDUfutAseVswa9zYHwCw

  • How much information can you mine out of a name?

    - by Finglas Fjorn
    While not directly related to programming, I figured that the programmers on here would be just as curious as I was about this question. Feel free to close the question if it does not meet the guidelines. A name: first, possibly a middle, and a surname. I'm curious about how much information you can mine out of a name, using publicly available datasets. I know that you can get the following with anywhere between a low and high probability (depending on the input) using US census data: 1) gender; 2) race. Facebook, for instance, used exactly that to find out, with a decent level of accuracy, the racial distribution of users of their site (https://www.facebook.com/note.php?note_id=205925658858). What else can be mined? I'm not looking for anything specific; this is a very open-ended question to assuage my curiosity. My examples are US-specific, so we'll assume that the name is the name of someone located in the US; but if someone knows of publicly available datasets for other countries, I'm more than open to them too. I hope this is an interesting question!

  • github team workflow - to fork or not?

    - by aporat
    We're a small team of web developers currently using Subversion, but soon we're making the switch to GitHub. I'm looking at different types of GitHub workflows, and we're not sure if the whole forking concept in GitHub for each developer is such a good idea for us. If we use forks, I understand each developer will have his own private remote and local repositories. I'm worried it will make pushing changesets hard and too complex. Also, my biggest concern is that it will force each developer to have 2 remotes: origin (which is the remote fork) and an upstream (which is used to "sync" changes from the main repository). Not sure if that is such an easy way to do things. This is similar to the workflow explained here: https://github.com/usm-data-analysis/usm-data-analysis.github.com/wiki/Git-workflow If we don't use forks, we can probably get by fine using a central repo, creating a branch for each task we're working on and merging them into the development branch on the same repository. It means we won't be able to restrict merging of branches, and it might be a little messy to have many branches on the central repository. Any suggestions from teams who have tried both workflows?

  • loss of sound in ubuntu 12.04

    - by Leo Simon
    I'm running Linux E6520 3.2.0-56-generic #86-Ubuntu SMP Wed Oct 23 09:20:45 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux on a Dell Latitude E6530. (This is a new machine; I ran the same version of Linux on an older machine for a year without this happening.) I've been losing sound regularly, though I have not been able to isolate the trigger. I've scoured the web on this subject, in particular https://help.ubuntu.com/community/SoundTroubleshootingProcedure and "Audio stopped working suddenly in 12.04". Nothing from the first site seemed to work for me. From the second site, I learned enough to be able to fix the problem when it happens, but nothing on the web has helped me figure out why the problem is happening in the first place. Patching together stuff from the web, and with some blind luck, I've found that the following steps seem to restore sound:

        pulseaudio --kill
        pulseaudio --start
        pavucontrol   # -> output devices
        # Click on the "Mute audio" icon, which mutes audio.
        # Click on the "Mute audio" icon again, which unmutes audio.

    This obviously doesn't make sense: audio wasn't muted in the first place, but somehow, magically, toggling mute off and on seems to reset something. Can anybody suggest, from this information, why sound is disappearing in the first place (it seems as though something is getting muted at the system level, but I don't know what)? And is there a simpler (command-line/script) way of restoring sound - in particular, is it possible to reset pavucontrol from the command line? Some other pieces of information that may be of use:

        The problem is clearly happening at the system level, since I've set up a clean new user and this user has the same problems that I do. So user-level fixes like deleting the .pulse directory don't help.
        Sound works fine in Windows (dual-boot), so it's not a hardware problem.

    Any help/suggestions on this would be most appreciated.
