Search Results

Search found 22641 results on 906 pages for 'use case'.

Page 411/906 | < Previous Page | 407 408 409 410 411 412 413 414 415 416 417 418  | Next Page >

  • I can't enable extra effects in Ubuntu 10.10. Please help?

    - by jasoncruz98
    I installed Ubuntu 10.10 alongside Ubuntu 11.10 in order to use an older version of Compiz. On Ubuntu 11.10, Compiz was enabled by default and I didn't need to install any graphics driver to enjoy the effects; all I had to do was install CompizConfig Settings Manager and enable the extra effects. That was Compiz 0.9.6. Now, after installing Ubuntu 10.10, the graphics were slow from the very first login: when I dragged a window from one end of the screen to the other, the whole screen would blur and pixelate, and everything was very laggy. I tried going to System > Preferences > Appearance and selecting Extra effects on the Visual Effects tab, but all I got was "Desktop effects could not be enabled". I don't know whether I should install the proprietary driver from Additional Drivers, because my Internet is slow and it would take a long time. Furthermore, in Ubuntu 11.10, after I installed the proprietary graphics driver I immediately went into fallback mode and wasn't even offered an option to set my desktop session to Ubuntu 3D. I didn't need the driver to run Compiz in Ubuntu 11.10; it just ran smoothly. But in Ubuntu 10.10 everything is laggy. Should I install the ATI/AMD proprietary FGLRX graphics driver on Ubuntu 10.10 to enable extra effects, or is there something else wrong with my system? Here is the output of lspci -nn | grep VGA on 10.10:

        00:02.0 VGA compatible controller [0300]: Intel Corporation Sandy Bridge Integrated Graphics Controller [8086:0116] (rev 09)
        01:00.0 VGA compatible controller [0300]: ATI Technologies Inc Device [1002:6760]

    And here is the output of the same command in Ubuntu 11.10 (in this case the correct one, because it identifies the actual hardware rather than a generic device):

        00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0116] (rev 09)
        01:00.0 VGA compatible controller [0300]: ATI Technologies Inc NI Seymour [AMD Radeon HD 6470M] [1002:6760]

  • SSL client auth in nginx with multiple server section

    - by Bastien974
    I want to implement ssl_verify_client in nginx. This works perfectly when I have only one server section, listening on 443. In my case I have multiple server sections, all listening on 443 but with different server_name values. For one particular server (proxy.mydomain.com) I'm adding the SSL client verification, but when I test the connectivity with

        openssl s_client -connect proxy.mydomain.com:443 -cert xxx.crt -key xxx.key

    and then do a

        GET / HTTP/1.1
        Host: proxy.mydomain.com

    it fails with "400 No required SSL certificate was sent". I think nginx is not receiving the proper server name and is directing the request to the first server listening on 443. When I tried listening on another port instead, it worked right away. What's the issue and how can I fix it?
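
    One thing worth checking (an educated guess, not a confirmed diagnosis): nginx asks for the client certificate during the TLS handshake, and without SNI it uses the default server block for port 443 at that point; the ssl_verify_client requirement is only enforced later, when the Host header routes the request to the proxy.mydomain.com block, which would produce exactly this 400. openssl s_client only sends SNI when given -servername:

        openssl s_client -connect proxy.mydomain.com:443 -servername proxy.mydomain.com \
            -cert xxx.crt -key xxx.key

    For reference, a minimal sketch of the intended server block (paths are placeholders):

        server {
            listen 443 ssl;
            server_name proxy.mydomain.com;

            ssl_certificate        /etc/nginx/ssl/proxy.crt;
            ssl_certificate_key    /etc/nginx/ssl/proxy.key;
            ssl_client_certificate /etc/nginx/ssl/ca.crt;  # CA that signed the client certs
            ssl_verify_client      on;
        }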

  • Google is not treating two Australian schools as separate sites when both are subdomains of qld.edu.au

    - by LuckySpoon
    My question relates to two websites, each of which is a "Calvary Christian College", but in two totally different locations and entirely unrelated to each other (except by name, and thus domain). All schools in the state are issued a <school-name>.qld.edu.au subdomain, in this case calvary.qld.edu.au and calvarycc.qld.edu.au. Now what's interesting is that these domains are crossing into each other's sitelinks for searches such as "calvary christian college townsville". The green data here is for one school (the Townsville school, as per the search term), and the red data is for the other school. I submitted a sitelink demotion for this six months ago (we control calvary.qld.edu.au), but we're seeing no change on the results page. I have been able to get the owners of calvarycc.qld.edu.au to submit demotions for our domain, which should go in sometime in the next few days. What can we do to tell Google that these websites are not interchangeable, despite both appearing as "subdomains" of qld.edu.au? We could possibly open channels of communication with the administrators of qld.edu.au, but we would need to tell them what to change, and at this point I'm out of ideas.

  • How to decide how backward-compatible my new Mac OS X application should be?

    - by haimg
    I'm currently contemplating writing an OS X version of my Windows software. My Windows application still supports Windows XP, and I know that if I dropped support for it now, our customers would cry bloody murder. I'm new to OS X development, and as I learn the technology, APIs, etc., I've realized that if I'm going to provide a comparable level of backward compatibility (e.g. down to OS X 10.5), I won't be able to use many things that look very useful and relevant in my case (ARC, XPC communication, many others). This is quite different from Windows, in my opinion, where very little changed between Windows XP and Windows 7 from a desktop application developer's standpoint. So, on one hand, it seems like a complete waste to stick to a 2007- or 2009-level API in 2012. On the other hand, according to the NetMarketShare report and the Stat Owl report, Mac OS X 10.5 and 10.6 market share is still 11% and 35-40% respectively. However, I'm not sure these older-OS users are my target audience (buyers of software utilities) if they didn't bother to upgrade their OS... My question: are there any other factors I should take into account when deciding whether to target 10.5, 10.6 or 10.7 for a new application?

  • FCoE, on any Ethernet switch?

    - by javano
    I understand the concept of FCoE. Looking at the layer 2 frame diagram on the Wikipedia page, it looks like FCoE really should "just work" on any Ethernet switch, but is this really the case? If so, what do switches like Cisco's Nexus 5k or 6120P offer that normal switches don't (specifically in relation to FCoE)? I am just using those two switches as examples. The Nexus 5548UP page, for example, says the following: "Unified ports that support traditional Ethernet, Fibre Channel (FC), and Fibre Channel over Ethernet (FCoE)". Well, if FCoE runs over regular Ethernet, then why does it need to support "Ethernet and Fibre Channel over Ethernet" separately? This is why I am curious as to whether FCoE will run on any Ethernet switch, with these switches merely adding "bonus" features, or whether you do indeed need a specialist switch. Thank you.

  • Encrypt shared files on an AD domain

    - by Walter
    Can I encrypt shared files on Windows Server and allow only authenticated domain users to have access to these files? The scenario is as follows: I run a software development company, and I would like to protect my source code from being copied by my programmers. One complication is that some programmers use their own laptops to develop the company's software, so it's impossible to simply prevent developers from copying the source code onto their machines. Instead, I thought about the following solution, but I don't know if it's possible to implement: encrypt the source code so that it is accessible (decrypted) only while developers are logged into the AD domain, i.e. if they are not logged into the AD domain, the source code remains encrypted and useless. How can this be implemented using EFS?
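
    For the mechanics of switching EFS on for a folder tree, a minimal sketch using the built-in cipher tool (the path is a placeholder; issuing and escrowing the EFS certificates through Active Directory Certificate Services, so the keys stay under company control, is a separate exercise):

        rem Encrypt the source tree with EFS, run as the account that owns the files
        cipher /e /s:D:\SourceCode

        rem Check which files ended up encrypted
        cipher /c D:\SourceCode\*

    Note this only sketches the encryption step; whether EFS alone can tie decryption to an active domain logon, as the question asks, depends on how the keys and cached credentials are managed.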

  • How to force Multiple Monitors correct resolutions for LightDM?

    - by Hanynowsky
    I am affected by this bug: https://bugs.launchpad.net/ubuntu/+source/unity-greeter/+bug/874241. Like others there, I have a laptop connected to a second monitor of higher resolution. At the login stage, LightDM mirrors the displays on both screens and assigns them a common resolution (1024x768 in my case), instead of extending the desktop (primary screen with the greeter, secondary with just a logo, as described in the Multiple Monitors UX specification for 12.04). Here is my xrandr -q:

        @L502X:~$ xrandr -q
        Screen 0: minimum 320 x 200, current 1920 x 1848, maximum 8192 x 8192
        LVDS1 connected 1366x768+309+1080 (normal left inverted right x axis y axis) 344mm x 193mm
           1366x768       60.0*+
           1360x768       59.8     60.0
           1024x768       60.0
           800x600        60.3     56.2
           640x480        59.9
        VGA1 disconnected (normal left inverted right x axis y axis)
        HDMI1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 510mm x 287mm
           1920x1080      60.0*+
           1600x1200      60.0
           1680x1050      60.0
           1280x1024      60.0
           1440x900       59.9
           1280x960       60.0
           1280x800       59.8
           1024x768       60.0
           800x600        60.3     56.2
           640x480        60.0
        DP1 disconnected (normal left inverted right x axis y axis)

    I tried to make LightDM run some xrandr commands in order to set the right resolution for each monitor and extend the desktop, but I get a LOW GRAPHICS MODE error ("You're running in low graphics mode, your screen, input devices ... did not get detected"). I created a simple script named lightdmxrand.sh:

        #!/bin/sh
        xrandr --output HDMI1 --primary --mode 1920x1080 --output LVDS1 --mode 1366x768 --below HDMI1

    and told LightDM to run it in /etc/lightdm/lightdm.conf:

        [SeatDefaults]
        greeter-session=unity-greeter
        user-session=ubuntu
        greeter-setup-script=/usr/bin/numlockx on
        display-setup-script=/home/hanynowsky/lightdmxrandr.sh

    Does anyone know what is wrong? Thanks in advance.

  • Protect apache pages by URL

    - by Thomas
    Is it possible to allow access to specific URLs only from certain networks? Basically, I would like to restrict access to the admin area to the local network only. This area's pages are prefixed by /admin; essentially, I would like all of /admin/* to be forbidden to public access. Can Apache handle such a case? Thanks.

    UPDATE: Using your suggestions I came up with

        <LocationMatch admin>
            Order allow,deny
            deny from all
            Allow From 192.168.11.0/255.255.255.0
        </LocationMatch>

    However, I get a 403 even though I am on that network. Additionally, if I put Apache behind haproxy, is this going to work? The traffic will then reach Apache from 127.0.0.1.
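
    A possible explanation: with Order allow,deny, a request that matches both an Allow and a Deny directive is denied, so "deny from all" overrides the Allow and everybody gets a 403. A sketch in the same Apache 2.2 syntax, reversing the order and covering the haproxy case:

        <Location /admin>
            Order deny,allow
            Deny from all
            # with Order deny,allow, matching Allow lines take precedence
            Allow from 192.168.11.0/24
            # behind a haproxy on the same host, requests arrive from loopback
            Allow from 127.0.0.1
        </Location>

    The caveat on the last line: once loopback is allowed, every proxied request is allowed regardless of the real client address; restoring the original client IP from X-Forwarded-For needs something like mod_rpaf and is beyond this sketch.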

  • bridge to share WiFi internet connection over eth0

    - by rubo77
    The following script creates a mesh-network bridge and serves an internet connection over eth0 to other machines via DHCP:

        echo "starting bridge to share internet connection over eth0"
        ifconfig eth0 up promisc
        brctl addbr br-freifunk
        brctl addif br-freifunk bat0
        brctl addif br-freifunk eth0
        echo "internet starting, this may take some minutes due to latency..."
        echo "(use"
        echo "tail -f /var/log/syslog"
        echo "in another window for debugging)"
        echo
        echo dhclient br-freifunk
        echo "The error 'Rather than invoking init scripts through /etc/init.d...' can be ignored:"
        dhclient br-freifunk
        # without bridge: dhclient bat0

    This works fine with the Freifunk mesh network. But how can I share internet in the normal case? I would like to connect to any open WiFi I find, using NetworkManager or wicd (yes, with the graphical GUI), and then start a small script that bridges it to the wired network port on my laptop. I use Ubuntu and my network interfaces are eth0 and wlan0.
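
    Bridging a client-mode wlan0 into a Linux bridge generally does not work, because the access point only accepts frames from the associated MAC address, so for the "normal case" the usual approach is NAT instead of a bridge. A sketch, assuming wlan0 is already online via NetworkManager or wicd and dnsmasq is installed:

        #!/bin/sh
        # share the wlan0 internet connection to a client on eth0 via NAT
        ifconfig eth0 192.168.42.1 netmask 255.255.255.0 up
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
        iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
        iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
        # hand out addresses to whatever is plugged into eth0
        dnsmasq --interface=eth0 --dhcp-range=192.168.42.10,192.168.42.50,12h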

  • How to code Time Stop or Bullet Time in a game?

    - by David Miler
    I am developing a single-player RPG platformer in XNA 4.0. I would like to add an ability that makes time "stop" or slow down, with only the player character moving at the original speed (similar to the Time Stop spell from the Baldur's Gate series). I am not looking for an exact implementation, rather some general ideas and design patterns.

    EDIT: Thanks all for the great input. I have come up with the following solution:

        public void Update(GameTime gameTime)
        {
            GameTime newGameTime = new GameTime(gameTime.TotalGameTime,
                new TimeSpan(gameTime.ElapsedGameTime.Ticks / DESIRED_TIME_MODIFIER));
            gameTime = newGameTime;
            // ...
        }

    or something along these lines. This way I can pass a different time to the player component than to the rest. It certainly is not universal enough for a game where warping time like this is a central element, but I hope it will work in this case. I kinda dislike the fact that it litters the main Update loop, but it certainly is the easiest way to implement it. I guess that's essentially the same as tesselode suggested, so I'm going to give him the green tick :)
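
    Building on that, a minimal sketch of how the scaled GameTime might be routed to the world while the player keeps full-speed time (the player/world components and the timeStopActive flag are hypothetical names, not part of the original code):

        public override void Update(GameTime gameTime)
        {
            // e.g. 4 => the world runs at quarter speed while the spell is active
            long divisor = timeStopActive ? 4 : 1;
            GameTime worldTime = new GameTime(
                gameTime.TotalGameTime,
                new TimeSpan(gameTime.ElapsedGameTime.Ticks / divisor));

            player.Update(gameTime);   // player moves at the original speed
            world.Update(worldTime);   // everything else is slowed down
        }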

  • BizTalk 2009 - How do I do t"HAT"?

    - by StuartBrierley
    In my previous life working with BizTalk Server 2004, I came to view HAT (the Health and Activity Tracking tool) as one of my first ports of call whenever there was a problem with any of our BizTalk solutions. When you move to BizTalk Server 2009, it is quickly apparent that HAT is no longer with us. HAT was useful in BizTalk 2004 mainly because it provided developers and administrators with a number of useful queries and views of what was going on inside BizTalk at runtime: when and what type of messages were received and sent, which messages had been suspended, which orchestrations were running or suspended; you could even follow the process flow of a message or orchestration to see what was going on. With BizTalk Server 2009, much of the functionality of HAT can now be found in the BizTalk Administration console. Select a BizTalk Group and you will be shown the Group Hub Overview page. This provides a number of default queries that replicate some of those found in the old HAT. You can also use the Group Hub page to create new queries. These can then be saved and loaded in other Group Hub instances, which is useful for creating queries in development for later use in Test, Pseudo-Live and Live environments. In the next few posts I am going to look at some of the common queries that we might miss from HAT and recreate them (or something close) using the new query option:

        - Messages - last 100 received
        - Messages - last 100 sent
        - Messages - last 50 suspended
        - Service instances - last 100

    I have yet to try the updated Admin-HAT-Console in anger, and after using old-HAT for so long it may take some getting used to, but so far I would say that moving the HAT functionality into the BizTalk Administration console was probably the correct way to go. Having one tool as the place to look for the combined functionality on offer certainly seems to be the sensible option.

  • Set ReturnPath globally in Postfix

    - by Gaia
    I have Magento using sendmail and WordPress using PHPMailer to send webapp-generated mail. Occasionally, someone enters their email address incorrectly and the mail (let's say, a purchase receipt) bounces back to the return-path specified by the script. I don't want to set the return-path for each vhost individually, especially because it is not easily done. Ideally, WordPress would use the address of the blog admin and Magento would use one of the numerous email fields it lets you specify, but both default to username@machinename (in my case, username is the system user and machinename is an FQDN, but not the same as the actual vhost FQDN). The result is that bounced mail returns to the server and, since the server is used only for outbound SMTP, the messages sit there, undelivered and, worse, unread. I'm on Postfix 2.6.6 on CentOS 6.3; is it possible to globally force a specific return-path for all messages sent via PHP on the server?
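
    One route worth trying (a sketch, not a tested recipe; the addresses are placeholders) is Postfix's sender_canonical_maps restricted to the envelope sender, which rewrites the return-path without touching the From: header:

        # /etc/postfix/main.cf
        sender_canonical_classes = envelope_sender
        sender_canonical_maps = regexp:/etc/postfix/sender_canonical

        # /etc/postfix/sender_canonical
        /^username@machinename\.example\.com$/   bounces@example.com

    followed by a "postfix reload". sender_canonical_classes is available from Postfix 2.2 on, so 2.6.6 qualifies.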

  • Converting NTFS to ZFS (or other)

    - by NumberFour
    Are there any benefits to converting HDDs that hold NTFS filesystems on a Linux machine to ZFS? Is there a way to do such a conversion in Linux without losing the data? And what about the stability of ZFS on Linux: does FUSE really work well in this case? People say that the only way to get real, full ZFS support is to install Solaris. I understand that the best choice for Linux would be ext4, but I really haven't found a way to convert from NTFS to ext4 without sacrificing all the data. On the other hand, I have doubts whether changing from NTFS to ZFS while using Linux is really wise. Thanks for any tips.
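
    For the record, there is no in-place NTFS-to-ZFS converter; the usual route is a spare disk and a copy. A sketch (device names are placeholders):

        # build the pool on the new disk and copy the data across
        zpool create tank /dev/sdb
        zfs create tank/data
        mount -t ntfs-3g -o ro /dev/sda1 /mnt/ntfs
        rsync -a /mnt/ntfs/ /tank/data/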

  • LaTeX page number position

    - by anton
    I am working on a long document in LaTeX with document class book. I need the page number to always be in the upper right corner of each page, even when that page is the first page of a chapter (right now, on the first pages of chapters the page number is bottom-centered, while on all other pages it's top-right). I control the position of the page number with fancyhdr:

        \usepackage{fancyhdr}
        \pagestyle{fancy}
        \lhead{}
        \chead{}
        \rhead{\thepage}
        \lfoot{}
        \cfoot{}
        \rfoot{}
        \renewcommand{\headrulewidth}{0pt}

    Also, I don't know if the problem is related, but my chapters do not start from the top of the page: there is a white area, then "Chapter X", then a new line with the chapter title. I would also like chapters to start from the top of the page. The main question here is how I can get the page number to always appear in the upper right corner; I mention the chapter title position only in case it might be related. Thanks.
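
    For what it's worth, the usual explanation is that \chapter switches the first page of each chapter to the plain page style, so that style has to be redefined as well; a minimal sketch with fancyhdr:

        \fancypagestyle{plain}{%
          \fancyhf{}%
          \rhead{\thepage}%
          \renewcommand{\headrulewidth}{0pt}%
        }

    The vertical drop before the chapter heading is separate, intentional book-class behaviour; packages such as titlesec can change it.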

  • Visual Studio 2012 first impressions...no Macros!

    - by bconlon
    Yesterday I installed Microsoft Visual Studio 2012 for the first time (all 8.5 GB), and after 20 years of (mostly) happy times using VS, they have removed macros, one of its most handy features. The first thing I wanted to do when I upgraded my VS2010 project was to add an #elseif block to each file. This would usually be a simple case of Find in Files for the previous #elseif, then Ctrl+Shift+R to record a macro: F8 (to select the next file from the find list), F3 (to find the correct position in the file), Ctrl+V (to paste the new code). Then all I would need to do is keep Ctrl+Shift+P (Play Macro) pressed until all the files were processed. But alas, Ctrl+Shift+R does nothing! I won't say that I used macros every day, but it was a very useful feature. To continue my moaning a little more, I also don't like the bland interface. This has been well documented by others, but now that I have used it myself, I find it difficult to tell one grey area of the screen from another, and the lack of colour makes the icons unclear. I also don't see why the menus now need to SHOUT in capital letters. On the plus side, they have now added the ability to see WPF properties in the debugger... a bit of an oversight in Visual Studio 2010. Oh, but you still can't edit and continue on files that contain templated code. Whilst Visual Studio 2012 is not a complete disaster like Windows 8 (why develop a desktop OS to be the same as a smart-device OS?), it does not float my boat. Rant over.

  • Low-res emacs24 icon in application switcher 12.10

    - by MTS
    I recently upgraded to Quantal, and also switched up to emacs24 from 23. Everything is great, except for one thing: the icon in the Application Switcher for emacs24 is a horrible, low-resolution eyesore. Side by side with the emacs23 icon, the difference is glaring. I've seen a couple of questions addressing issues like this, but they're not quite the same. One says that it happens with all icons, but that's clearly not the case here. Another seems more relevant, but it is talking about Gnome, not Unity. In the comments to the one answer for the second question, it says to look at the icons in /usr/share/icons to see if they are low-resolution, and if so to replace them with better ones. There are a ton of emacs icons, in fact. They are in various subfolders of /usr/share/icons/hicolor, in sizes ranging from 16x16 to 128x128, and there are also scalable .svg versions of the icons. I noticed that there are no 192x192 or 256x256 versions, but it seems like that shouldn't matter, since emacs23 also didn't have icons in those sizes. Any help would be much appreciated!
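
    If a sharper icon is available, one approach (a sketch; the icon name and source file are assumptions about your setup) is to drop it into the hicolor theme and refresh the icon cache:

        # install a higher-resolution icon and rebuild the cache
        sudo cp emacs24.png /usr/share/icons/hicolor/128x128/apps/emacs24.png
        sudo gtk-update-icon-cache -f /usr/share/icons/hicolor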

  • Doing a mysql dump causes swapping issues

    - by DFischer
    I do a mysqldump manually every night. I just noticed that after it finishes, the website is very slow. Looking at free -mh, I see that the server is now swapping, when it wasn't before the mysqldump. What should I do in this case? Restarting the server after every backup doesn't seem very effective. The raw database dump is 1.1 GB.
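
    Short of restarting, two knobs that may help (a sketch; --single-transaction assumes InnoDB tables): run the dump at low CPU and I/O priority so it competes less for memory and disk, and make the kernel less eager to swap application pages out:

        # nightly dump at low priority
        nice -n 19 ionice -c2 -n7 mysqldump --single-transaction mydb > /backup/mydb.sql

        # reduce the kernel's willingness to swap (default is 60)
        sysctl vm.swappiness=10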

  • Preferred lambda syntax?

    - by Roger Alsing
    I'm playing around a bit with my own C-like DSL grammar and would like some opinions. I've reserved the use of "(...)" for invocations, e.g.:

        foo(1,2);

    My grammar supports "trailing closures", pretty much like Ruby's blocks, which can be passed as the last argument of an invocation. Currently my grammar supports trailing closures like this:

        foo(1,2) {
            //parameterless closure passed as the last argument to foo
        }

    or

        foo(1,2) [x] {
            //closure with one argument (x) passed as the last argument to foo
            print (x);
        }

    The reason why I use [args] instead of (args) is that (args) is ambiguous:

        foo(1,2) (x) { }

    There is no way in this case to tell whether foo expects 3 arguments (int, int, closure(x)), or whether foo expects 2 arguments (int, int) and returns a closure with one argument, closure(x). So that's pretty much the reason why I use [] for now. I could change this to something like:

        foo(1,2) : (x) { }

    or

        foo(1,2) (x) -> { }

    So the actual question is: what do you think looks best? [...] is somewhat wrist-unfriendly:

        let x = [a,b] { }

    Ideas?

  • HTTP Caching Server that supports POST

    - by Jeroen
    I am hosting a REST service which sends appropriate cache-control headers, and I use Varnish as a caching server in front of my webserver. However, a limitation of Varnish is that it doesn't support caching HTTP POST and HTTP PUT. Is there an alternative caching server that is able to cache these requests? I understand that caching POST is a bit tricky, because you cannot cache based on the URL alone as a key, as you can for GET; the cache needs to actually inspect the request body. In the case of multipart/form-data requests, there should probably be a limit on the size of the request body for it to be cached (so that big file uploads, etc. won't be cached). Nevertheless, I really want to be able to cache short HTTP POSTs, or at least the application/x-www-form-urlencoded ones.

  • Suggestions on providing HA access to an external (fibre) RAID subsystem

    - by user145198
    We are looking at upgrading our storage capacity with an external RAID subsystem that has two redundant fibre controllers, each with 4 x 8 Gbps fibre ports. I would like access to this storage system to go via HA Linux. Ideally, I would connect two fibre ports from each controller into each Linux server, and then export either NFS or iSCSI via a 10 GbE interface. I have seen plenty of references to DRBD; however, all of those references tend to use block storage that is attached solely to each machine, rather than a shared block-storage device, so I am unsure whether DRBD could (or should) be used in this case. Ideas?

  • Chrome mouse lag

    - by Mike Pateras
    I recently upgraded to Windows 7 Ultimate (an actual upgrade, without reinstalling) and now I'm experiencing some really annoying mouse lag in Chrome. It seems to happen most often when there is a Flash element on the page, though that's not always the case. I have a Windows Experience Index base score of 7.5 (I just re-ran it) and 7.7 for desktop and gaming graphics, and Skyrim still runs fine. I ran an Intel chipset update and re-installed my Catalyst drivers (I uninstalled them first), and I have no pending Windows Updates. I've also re-installed both Chrome and Flash, and I've disabled all of my Chrome extensions. Nothing has helped. What might be causing this very annoying issue, and what might I do to fix it, short of reformatting? Thank you.

  • How to convince a client to switch to a framework *now*; also examples of great, large-scale PHP applications

    - by cbrandolino
    Hi everybody. I'm about to start working on a very ambitious project that, in my opinion, has great potential in its basic concept and in the ideas for implementing it (implementation as in how the ideas will be realized, not as in programming). The state of the code right now is unfortunately subpar: it's vanilla PHP, no framework, no separation between application and presentation logic. It's been done mostly by amateur students (I know great amateur/student programmers, don't get me wrong; that was not the case here). The clients are really great, and they know the system won't scale and needs a redesign. The problem is, they would like to launch a beta ASAP and only then think about rebuilding. Since just the basic functionality is present now, I suggested it would be a great idea if we (we're a three-person shop, all very proficient) ported the code to some framework (we like CodeIgniter) before launching; we could reasonably do that in under 10 days. Problem is, they don't think PHP would be a valid long-term solution anyway, so they would prefer to just leave it be, fix the bugs for now (there are quite a few), and later switch directly to some Ruby- or Python-based system. Porting to CodeIgniter now would make future improvements incredibly easier, make the current code more secure, and make changing the style (still being discussed with the designers) a breeze; as a reminder, there are database calls in template files right now. The biggest obstacle is their lack of trust in PHP as a valid, scalable technology. So I need some examples of great PHP applications (apart from Facebook) and some suggestions on how to convince them to port soon. Again, they're great people; it's not that they like Ruby because it's so hot right now, they just don't trust PHP since us cool programmers like bashing it, I suppose. But I'm sure going on like this for even one more day would be a mistake. Also, we have some weight in the decision process.

  • Understanding Application binary interface (ABI)

    - by Tim
    I am trying to understand the concept of an application binary interface (ABI).

    From The Linux Kernel Primer:

        An ABI is a set of conventions that allows a linker to combine separately compiled modules into one unit without recompilation, such as calling conventions, machine interface, and operating-system interface. Among other things, an ABI defines the binary interface between these units. ... The benefits of conforming to an ABI are that it allows linking object files compiled by different compilers.

    From Wikipedia:

        An application binary interface (ABI) describes the low-level interface between an application (or any type of) program and the operating system or another application. ABIs cover details such as data type, size, and alignment; the calling convention, which controls how functions' arguments are passed and return values retrieved; the system call numbers and how an application should make system calls to the operating system; and in the case of a complete operating system ABI, the binary format of object files, program libraries and so on.

    I was wondering whether an ABI depends on both the instruction set and the OS, and whether those two are all it depends on. What role does the ABI play in the different stages of compilation: preprocessing, conversion from C to assembly, conversion from assembly to machine code, and linking? From the first quote above, it seems to me that the ABI is needed only for the linking stage, not the other stages; is that correct? When does the ABI need to be considered: during programming in C, assembly, or other languages? If so, how are ABI and API different? Or is it only a concern for the linker or compiler? Is the ABI specified for/in machine code, assembly language, and/or C?
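
    A small illustration of the "data type, size, and alignment" part: the C language only promises that struct members appear in order, while the ABI pins down their exact sizes, padding and alignment, which is what lets separately compiled modules share the same struct. A sketch (the 8-byte result assumes a typical System V x86/x86-64 ABI):

        #include <stdio.h>

        struct msg {
            char tag;    /* 1 byte, then 3 bytes of ABI-mandated padding */
            int  value;  /* 4 bytes, 4-byte aligned per the ABI          */
        };

        int main(void)
        {
            /* modules compiled by different compilers and linked into one
             * program must agree on this number, and the ABI is that
             * agreement */
            printf("sizeof(struct msg) = %zu\n", sizeof(struct msg));
            return 0;
        }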

  • Is my website running on an iPhone?

    - by Stefano Borini
    Provocative question, but in any case this is what it would appear... unless there's some other reason, of course. In my cystat WordPress log, I found the following entry:

        IP         Browser       OS      Date                   Method  Type  URL
        127.0.0.1  Safari 419.3  iPhone  July 30, 2009 7:39 pm  GET     BLOG  /blog/

    The IP address is supposed to be the IP of the visiting client, so it's clear that this is not possible. Why do I get 127.0.0.1 as the IP? A cystat bug? Some weird network trick I am not aware of? Or is my website really running on an iPhone, with the guy at the Apple Store reading my blog?

  • Which areas of Linux should a PHP developer know about? (Just commands, or something more advanced?)

    - by droidsites
    I've developed a website using PHP, but I implemented it on Windows and hosted it on a Windows server. I searched the PHP job market to learn what technologies are currently required and to keep my knowledge up to date with the market, and I see most listings asking for the LAMP stack. I understand the sort of skills required of a developer in PHP and MySQL, but when it comes to Linux and Apache, what kind of skills exactly do companies expect from a developer? What should I be focusing on in Linux and Apache while developing my website on the LAMP stack? I am going to develop a new website and want to build it using LAMP, but I want to know what difference it makes. Why does the LAMP stack have more demand in the job market than WAMP?

    Edit: Sorry, I think my question was creating confusion, so let me put it in different words: which areas of Linux should a PHP developer know about? (Just commands, or something more advanced?)

    Note: I am a Linux newbie.
