Search Results

Search found 19299 results on 772 pages for 'off topic'.


  • Developing momentum on open source projects

    - by sashang
    Hi, I've been struggling to develop momentum contributing to open source projects. In the past I tried with gcc and contributed a fix to libstdc++, but it was a one-off: even though I spent months of my spare time on the dev mailing list and reading through things, I never seemed to develop any momentum with the code. Eventually I unsubscribed, got my free time back and uncluttered my mailbox. Like a lot of people I have some little defunct open source projects lying around on the net, but they're not large and I'm the only contributor. At the moment I'm more interested in contributing to a large open source project, and I want to know how people got started, because I find it difficult to develop any momentum with the code base while working full time. Other, more regular contributors, who are on the project full-time, are able to make changes at will and as a result enter that positive feedback cycle where they understand the code and also know where it's heading. It makes the barrier to entry higher for those who come along later. My questions are for people who actively contribute to large open source projects, like the Linux kernel, gcc, or clang/llvm, or anything else with, say, a developer head count of more than 10. How did you get started? Was there a large chunk of time in your life that you could dedicate to working on the project? I know in Linus's case he had a chunk of time (6 months) to get it started. What barriers to entry did you encounter? Can you describe the initial stages of your time with the project, from when you had little understanding of the code to when you understood enough to commit regularly? Thanks

    Read the article

  • Shooting Print Quality Pictures with a Camera Phone [Video]

    - by Jason Fitzpatrick
    Camera phones get a lot of grief for being underpowered, but in this video tutorial the crew at SLR Lounge shows how basic technique and a good eye overcome all. Last year over at the photo blog FStoppers they put together a video showing how you could use the iPhone as a fashion camera, essentially arguing that the camera wasn't as important as the photographer. A lot of people reacted to the video with "Well yeah, but you had professional models and thousands of dollars in lighting equipment!" In turn, the crew at SLR Lounge decided to make their own video using only an iPhone camera and two reflectors (as well as an attractive but informal model). It of course helps to have some sidekicks to hold up your reflectors, but the point still stands: modern camera phones are perfectly capable of good photos. The SLR Lounge iPhone Photo Shoot – A Follow Up Tribute to The FStoppers [YouTube]

    Read the article

  • Simplest way to render one image on top of another, with a third image used as a mask, in OpenGL?

    - by Adam Naylor
    The effect I'm looking for is a single large background image that is always visible (at full alpha), with a second image (what I call a light map or specular map) partially shown over the top based on a third image (which is effectively a mask). The effect is similar, except that instead of simply darkening or lightening the background image using the third image, it needs to mask the second image without affecting the first at all. The third image is the only one that moves, therefore hard-baking the third image's alpha into the second image isn't an option. If my explanation isn't clear I'll provide visual examples when I have more time. I'd prefer not to go down a shader route as I haven't taught myself this area yet, so unless I have to I'd rather try to achieve this with simple alpha blending. (Update: happy to use a shader approach.) Cheers. Additional: these third images are effectively light sources being cast onto the first image, showing the specular information from the second image to simulate light 'shining' off the objects in the first image. The solution I implement will need to allow two light sources to potentially overlap, so my current thought is that the alpha values of the two mask images will need to be combined (added?) to produce a final mask for the second image. Don't worry about things like coloured lights; for this technique the lights are all considered white.
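
    A fragment shader is probably the most direct way to express this. The sketch below is only that: a minimal illustration with made-up uniform and varying names, assuming the background has already been drawn normally and the specular map is then drawn as an overlay with the (moving) mask bound as a second texture.

    // Minimal GLSL 1.20-style sketch: output the specular map with its alpha
    // driven by the mask, then let ordinary alpha blending composite it over
    // the untouched background (glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)).
    uniform sampler2D u_specularMap;   // the "second" image
    uniform sampler2D u_lightMask;     // the "third" image, which moves each frame

    varying vec2 v_specularUV;         // texcoords into the specular map
    varying vec2 v_maskUV;             // texcoords into the mask, offset by the light position

    void main()
    {
        vec4  spec = texture2D(u_specularMap, v_specularUV);
        float mask = texture2D(u_lightMask, v_maskUV).a;   // 0 = hidden, 1 = fully shown
        gl_FragColor = vec4(spec.rgb, spec.a * mask);
    }

    Overlapping lights could then be handled by first rendering all the masks into a single texture with additive blending (clamped at 1.0) and sampling that combined texture here instead of a single mask.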

    Read the article

  • Windows Azure Horror Story: The Phantom App that Ate my Data

    - by jasont
    Originally posted on: http://geekswithblogs.net/jasont/archive/2013/10/31/windows-azure-horror-story-the-phantom-app-that-ate-my.aspx I’ve posted in the past about how my company, Stackify, makes some great tools that enhance the Windows Azure experience. By just using the Azure Management Portal, you don’t get a lot of insight into what is happening inside your instances, which are just Windows VMs. This morning, I noticed something quite scary, and fittingly enough, it’s Halloween. I deleted a deployment after doing a new deployment and VIP swap. But, after a few minutes, the instance still appeared in my Stackify dashboard. I thought this odd, as we monitor these operations and remove devices that have been deleted. Digging in showed that it wasn’t a problem with Stackify. This instance was still there, and still sending data. You’ll notice though that the “Status” check shows that the instance no longer exists. And it gets scarier. I quickly checked the running services, and sure enough, the Windows Azure Guest Agent (which runs worker roles) was still running. “Surely, this can’t be,” I’m thinking at this point. Knowing that this worker role is in our QA environment, it has debug logging turned on. And, using our log file tailing feature, I can see that, yes, indeed, this deleted deployment is still executing code. Pulling messages off queues. Sending alerts. Changing data. Fortunately, our tools give me the ability to manually disable the Windows service that runs this code. In the meantime, I have contacted everyone I know (and support) at Microsoft to address this issue. Needless to say, this is concerning. When you delete a deployment, it needs to actually delete. Not continue to eat your data. As soon as I learn more, I’ll be posting an update.

    Read the article

  • Mail.app doesn't see mail changes; requires mailbox "rebuild"

    - by vy32
    I use Mail.app on 4 different computers to access the same mailboxes. I see the same problem on all of them: if Mail.app is running, it catches all of the mail updates. But if a machine is turned off, when it turns back on the mailbox doesn't synchronize properly. I might delete 20 mail messages on one system and they are still on system #2 until I "rebuild." I have seen this happen with the MobileMe IMAP server, with Gmail, with Courier IMAP, and with Dovecot. Does anybody else see this? What should I do? Is there a mail server that this happens less with? It's very frustrating, as I get 200-300 mail messages a day and frequently end up handling email messages multiple times.

    Read the article

  • Securing a local server physically

    - by Daniele
    We are an online business. We have a very powerful server with hard disk mirroring in our office that we are using for a variety of internal business-critical functions. We want to keep that machine in our office, but we want to make sure it is as secure as possible (within reason). Obviously we are already backing it up off-site every day. My question is more about not-too-expensive physical measures to protect the machine against thieves and disasters such as fire. What would you suggest?

    Read the article

  • Dell Inspiron not finding Vodafone router

    - by Jeggy
    I have a Dell Inspiron 1564, and Ubuntu doesn't find my friend's router (he has a Vodafone router), even though it works great at home. Here is the output of sudo lshw -C network:

    jeggy@jeggy-XPS:~$ sudo lshw -C network
      *-network
           description: Wireless interface
           product: BCM4312 802.11b/g LP-PHY
           vendor: Broadcom Corporation
           physical id: 0
           bus info: pci@0000:04:00.0
           logical name: eth1
           version: 01
           serial: 78:e4:00:2a:d1:eb
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
           configuration: broadcast=yes driver=wl0 driverversion=5.100.82.38 latency=0 multicast=yes wireless=IEEE 802.11bg
           resources: irq:17 memory:f0200000-f0203fff
      *-network
           description: Ethernet interface
           product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
           vendor: Realtek Semiconductor Co., Ltd.
           physical id: 0
           bus info: pci@0000:05:00.0
           logical name: eth0
           version: 02
           serial: b8:ac:6f:67:32:52
           size: 10Mbit/s
           capacity: 100Mbit/s
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=N/A latency=0 link=no multicast=yes port=MII speed=10Mbit/s
           resources: irq:42 ioport:3000(size=256) memory:f0410000-f0410fff memory:f0400000-f040ffff memory:f0420000-f043ffff
      *-network
           description: Ethernet interface
           physical id: 4
           logical name: ham0
           serial: 7a:79:05:ff:3e:ec
           size: 10Mbit/s
           capabilities: ethernet physical
           configuration: autonegotiation=off broadcast=yes driver=tun driverversion=1.6 duplex=full firmware=N/A ip=5.255.62.236 link=yes multicast=yes port=twisted pair speed=10Mbit/s

    Read the article

  • Apache Redirect from https to https

    - by Nikolaos Kakouros
    I am trying to redirect, without a rewrite rule, from e.g. https://www.domain.com to https://www.domain.net. I have a wildcard certificate for *.domain.net. This yields the following warning in my error_log:

    [warn] RSA server certificate wildcard CommonName (CN) `*.domain.net' does NOT match server name!?

    This makes sense and I understand why the warning appears. I would like to ask if there is a way to use the Redirect directive to accomplish the above without the warnings. Here are my virtual hosts in ssl.conf:

    <VirtualHost *:443>
        SSLEngine on
        ServerName www.domain.net
        DocumentRoot /var/www/html/domain
        SSLOptions -FakeBasicAuth -ExportCertData +StrictRequire +OptRenegotiate -StdEnvVars
        SSLStrictSNIVHostCheck off
    </VirtualHost>

    <VirtualHost *:443>
        SSLEngine on
        ServerName www.domain.com
        ServerAlias www.domain.info
        Redirect permanent / https://www.domain.net
    </VirtualHost>

    Also, if there is a solution, can it be used for redirection from https://domain.com to https://www.domain.com? Thanks a lot!

    Read the article

  • How do I change pages registering as 404 to 200

    - by christian
    I have this problem. After relaunching my site, http://www.kgstiles.com, traffic dropped immensely (about 60%). After troubleshooting for a week and a half, and losing thousands of dollars from lost traffic in the process, I found that Google was getting a 404 error at the end of many of my 301 redirects (so it wouldn't index the new pages). Most of the pages, though, would render in my browser. They registered as a 404 error in Google's index as well as in a 404 checker. So my first question is: could this be what's causing my loss of traffic? And second: how do I fix it? I'm desperate! Any help is appreciated! Here is my .htaccess:

    # BEGIN s2Member GZIP exclusions
    <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{QUERY_STRING} (^|\?|&)s2member_file_download\=.+
        RewriteRule .* - [E=no-gzip:1]
    </IfModule>
    # END s2Member GZIP exclusions

    # BEGIN WordPress
    <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
    </IfModule>

    <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteRule ^moreinfo/(.*)$ http://www.kgstiles.com/moreinfo$1 [R=301]
        RewriteRule ^healthsolutions/(.*)$ http://www.kgstiles.com/healthsolutions$1 [R=301]
        RewriteRule ^(.*)\.html$ $1/ [R=301,L]
        RewriteRule ^(.*)\.htm$ $1/ [R=301,L]
    </IfModule>
    # END WordPress
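
    One way to see exactly what Googlebot sees is to follow every redirect hop from the command line and check the status code of each response. This is only a diagnostic sketch; the URL is a made-up example standing in for one of the redirected pages.

    # -I asks for headers only, -L follows redirects, so a 301 -> 404 chain is easy to spot
    curl -sIL http://www.kgstiles.com/moreinfo/example-page.html | grep -iE '^(HTTP/|Location:)'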

    Read the article

  • Unable to mount an LVM hard drive after upgrade

    - by Bruce Staples
    I imagine this is a basic gotcha, but I can't see it. I have a system with two physical hard drives. The boot system (/dev/sda) was running 10.04 and the second drive (/dev/sdb) was just a mounted filesystem. I did a clean install of Ubuntu 12.04 overwriting /dev/sda (not an upgrade) and now cannot mount the second drive, so I don't know what to enter for it in fstab. I had expected to use:

    /dev/sdb    /tera    ext4    defaults    0    2

    But even manual mounting fails (I have also tried various "-t" options on the off chance!):

    sudo mount -t ext4 /dev/sdb1 /tera
    mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
           missing codepage or helper program, or other error
           In some cases useful info is found in syslog - try
           dmesg | tail or so

    Output from disk queries indicates that it is a Linux LVM partition and still a healthy disk:

    sudo lshw -C disk
      *-disk:0
           description: ATA Disk
           product: WDC WD5000AACS-0
           vendor: Western Digital
           physical id: 0
           bus info: scsi@2:0.0.0
           logical name: /dev/sda
           version: 01.0
           serial: WD-WCASU1401098
           size: 465GiB (500GB)
           capabilities: partitioned partitioned:dos
           configuration: ansiversion=5 signature=00015a55
      *-disk:1
           description: ATA Disk
           product: WDC WD10EADS-00L
           vendor: Western Digital
           physical id: 1
           bus info: scsi@3:0.0.0
           logical name: /dev/sdb
           version: 01.0
           serial: WD-WCAU47836304
           size: 931GiB (1TB)
           capabilities: partitioned partitioned:dos
           configuration: ansiversion=5

    sudo fdisk -l
    Disk /dev/sda: 500.1 GB, 500106780160 bytes
    255 heads, 63 sectors/track, 60801 cylinders, total 976771055 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00015a55

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        2048   972580863   486289408   83  Linux
    /dev/sda2       972582910   976769023     2093057    5  Extended
    /dev/sda5       972582912   976769023     2093056   82  Linux swap / Solaris

    Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1               1  1953525167   976762583+  8e  Linux LVM

    LVM doesn't appear to be an option for mount or fstab. And here's a SMART data screenshot from Disk Utility.
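
    Because /dev/sdb1 is tagged as a Linux LVM physical volume, it can't be mounted directly; the logical volume inside it has to be activated first. A rough sketch with the standard LVM tools (assuming the lvm2 package is available; the volume group and logical volume names below are placeholders that the scan commands will reveal):

    sudo apt-get install lvm2   # if the LVM userspace tools aren't installed yet
    sudo pvscan                 # should list /dev/sdb1 as a physical volume
    sudo vgscan                 # finds the volume group that lives on it
    sudo vgchange -ay           # activates the logical volumes in that group
    sudo lvs                    # note the VG and LV names it reports
    sudo mount /dev/<vgname>/<lvname> /tera   # placeholder names, not real ones

    Once the right device node is known, the fstab entry would point at /dev/<vgname>/<lvname> (or its UUID) rather than at /dev/sdb.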

    Read the article

  • Every file on cPanel got deleted (then restored hours later), and I have no idea why

    - by mcranston18
    I apologize in advance if I don't provide proper detail; I am new to server stuff and am looking for general advice about this issue. I was helping out a client with web design last month. They have about a dozen static sites on one server. The sites are all built on Joomla, except one which I built on WordPress. Everything was working fine last month when we did the redesign, but all of a sudden this morning every single file on their server got deleted: every web page, file, and all e-mail addresses. I phoned the hosting company (alliancewww.com) to ask, "why did every single file suddenly delete off the server?" They said, "because someone must have deleted it." I said, "well no one did." (Which I'm pretty damn sure no one did.) They said, "you can pay us to look into the problem." I authorized $150 for them to look into the problem. About an hour later, everything was magically reinstated. The host said they had a backup of everything and just restored it. What I'm wondering: does anyone have recommendations for logs I can go through to investigate how the files got deleted in the first place? I've checked their cPanel logs but found nothing. Is it likely that this is a mess-up on the host's part?

    Read the article

  • Configure open_basedir under Plesk

    - by cori
    This might be a question for ServerFault, and if it wasn't for the Plesk aspect I would ask it there to start with, so if it's better suited for over there let me know and I'll move it. I'm working on a dedicated server set up as a reseller account with Plesk to manage the domains and server configuration, and I need to add a directory to the local open_basedir configuration for a specific vhost. Given Plesk's normal methodology, I expected to be able to go to /var/www/vhost/{%DOMAINNAME%}/conf, modify vhost.conf, and place a new value there, as I have successfully done with other configuration settings for this domain (turning safe_mode off, for instance). When I do so, however, the new setting doesn't take (per phpinfo();). If I edit httpd.conf (which the Plesk configuration specifically says not to do in the notes at the top of httpd.conf) the setting takes. Is there something specific about the open_basedir setting that makes it not configurable in vhost.conf? How much trouble am I letting myself in for by editing the vhost-specific httpd.conf (I imagine if someone makes changes in the Plesk web interface it might be overwritten, but what other risk is there)? Thanks!
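
    For reference, the kind of per-vhost override being attempted usually looks something like the snippet below. This is an illustrative sketch, not Plesk documentation, and the paths are placeholders.

    # hypothetical /var/www/vhosts/<domain>/conf/vhost.conf fragment
    <Directory /var/www/vhosts/<domain>/httpdocs>
        php_admin_value open_basedir "/var/www/vhosts/<domain>/httpdocs:/tmp:/extra/dir"
    </Directory>

    Two things worth checking: Plesk generally has to regenerate its Apache configuration (via its websrvmng/httpdmng utilities, depending on the version) before a vhost.conf change shows up in phpinfo(), and php_admin_value only applies when PHP runs as mod_php, so if this vhost runs PHP as CGI/FastCGI the setting would be silently ignored, which would also explain it not taking.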

    Read the article

  • The Bing Sting - an alternative opinion

    - by Charles Young
    I know I'm a bit of an MS fanboy at times, but please, am I missing something here? Microsoft, with permission of users, exploits clickstream data gathered by observing user behaviour. One use for this data is to improve Bing queries. Google equips twenty of its engineers with laptops and installs the widgets required to provide Microsoft with clickstream data. It then gets their engineers to repeatedly (I assume) type in 'synthetic' queries which bring back 'doctored' hits. It asks its engineers to then click these results (think about this!). So, the behaviour of the engineers is observed and the resulting clickstream data goes off to Microsoft. It is processed and 'improves' Bing results accordingly.

    What exactly did Microsoft do wrong here?

    Google's so-called 'Bing sting' is clearly a very effective attack from a propaganda perspective, but is poor practice from a company that claims to do no evil. Generating and sending clickstream data deliberately so that you can then subsequently claim that your competitor 'copied' that data from you is neither fair nor reasonable, and suggests to me a degree of desperation in the face of real competition.

    Monopolies are undesirable, whether they are Microsoft monopolies or Google monopolies.

    Personally, I'm glad Microsoft has technology in place to observe user behaviour (with permission, of course) and improve their search results using such data. I can only assume Google doesn't implement similar capabilities. Sounds to me as if, at least in this respect, Microsoft may offer the better technology.

    Read the article

  • Intel 5100 AGN disconnects and then disables my Verizon FiOS Actiontec router

    - by Anthony
    I am a new user of Ubuntu (my first trial) and booted up my Windows Vista laptop with the Ubuntu CD. I was able to connect wirelessly to my router and stay connected for about a minute. After that it would disconnect and my router would be disabled: none of my other wireless devices would see the signal any longer, as if it had disappeared. I would have to cycle the router's power toggle off/on to get it to come back on and put out a signal again. This happened repeatedly. I experienced no other problems trying the software (i.e., I accessed my files without issue). I did not attempt to connect with an Ethernet cable. Here are the specs on my system:
        laptop: HP Pavilion dv5 Notebook PC
        system type: 64-bit operating system
        CPU: Intel Core 2 Duo P8600 2.4GHz
        RAM: 4 GB
        network card: Intel WiFi Link 5100 AGN
    I read elsewhere in this forum that 64-bit systems may not be compatible with Ubuntu. Can anyone help me with this? I'd really like to be able to use this operating system.

    Read the article

  • Ubuntu 12.04 LTS stuck at login, mouse and keyboard not responding

    - by user169954
    I have Ubuntu 12.04 LTS installed on a dual-boot machine (i5 dual core, 8 GB RAM) and it's been working fine, but today for some reason when I logged out and tried to log back in it got stuck at the login page, i.e. the mouse and the keyboard are not responding. The mouse is stuck at the top left corner of my screen, and I can do nothing but turn the machine off. (I have been using a Logitech wireless mouse and keyboard.) I cannot access the virtual console (Alt-F1 or Ctrl-Alt-F1) either! Here is what I have tried so far:
        1. Verified it is not a hardware problem, since the mouse and keyboard work fine with Windows 7.
        2. Booted with the Ubuntu installation DVD and ran trial mode; the mouse and keyboard worked fine.
        3. Tried bypassing the login screen by booting into recovery mode and editing tty1.conf, but to no avail.
        4. Moved .Xauthority, .profile and .bashrc from my $HOME to another location so my login would proceed completely by system defaults. This did not help.
        5. From the recovery-mode command line, used dpkg-reconfigure to switch between gdm and lightdm. This did not help either.
    By the way, when in recovery mode as root from the command line, I can mount the filesystem and all apps work fine: Python is OK, Octave is OK, vi is OK, etc. I have a feeling that if I could only bypass the login screen and get into the desktop automatically, I would be OK. But I haven't been able to accomplish this either. I desperately need help please. Thank you in advance. Update: I tried to switch to lightdm with dpkg-reconfigure lightdm. This at least brings me up to the classic Linux command-line login prompt, but without a GUI. Should I install startx? Should I install ubuntu-desktop?

    Read the article

  • Running .NET app from network share in Win 7?

    - by mlangille
    We have a .NET 1.1 application that we keep on a network share. We install the .NET Framework on the local PCs and also set full trust via the following:

    %windir%\Microsoft.NET\Framework\v1.1.4322\caspol -pp off -cg LocalIntranet_Zone FullTrust

    This has worked fine on all PCs to date; however, we now have a few new PCs with Windows 7 and the process no longer works. The app will run fine from a local drive in Windows 7, but running the networked copy results in a general exception error. Any ideas on how to get this to work under Windows 7?

    Read the article

  • How can I force Firefox to select the tab to the right on tab close?

    - by Philip
    I'm using TMP 0.3.8.2 and a host of other extensions on FF 3.6.3 on OSX 10.6. I've tried disabling all other extensions that might interfere with tab settings but it's hard to keep track of everything because I have literally 60 (about 30 are disabled though) that are on and off in various configurations. I'm wondering if there's a way to FORCE Firefox to select the tab to the right on closing a tab, in such a way that it overrides any other setting anywhere else. I realize the prospects for this are dim. Thanks.

    Read the article

  • How can I make multiple displays work on my Asus UX32VD?

    - by oKtosiTe
    Original title: Why do I have two trash icons in the Unity Launcher? Whether I run Ubuntu as a live USB or install it, I always have two trash bins on the Unity Launcher. Both work, and both open the same location. This seems a bit redundant; what could be done about it? Update: Turning auto-hide on made it obvious that I have multiple Launchers showing. With auto-hide off, they simply overlap, making it look like there's a double trash icon, but with auto-hide enabled, I can display one Launcher (and therefore one trash icon) at a time. Still, two are running simultaneously. Second update: This problem appears to be caused by the way Ubuntu handles multiple displays on my Asus UX32VD Ultrabook. Somehow, the laptop display cannot be used while my external display is connected. It is shown in the Displays list, but remains black no matter how I configure it. The external display runs at 1920x1200; the laptop monitor should run at 1920x1080. It therefore becomes obvious that the Launcher that's supposed to run on the laptop display is actually displayed on the external monitor. Using nomodeset as a kernel parameter as indicated here makes the laptop display inaccessible altogether, detecting the external monitor as the laptop display and making resolutions other than 1920x1200 inaccessible. That is not an option.

    Read the article

  • How to implement the database for event conditions and item bonuses for a browser-based game

    - by Saifis
    I am currently creating a browser-based game, and was wondering what the standard approach is, database-wise, for making diverse conditions and status bonuses. I am currently considering two cases.

    Event conditions:
        Needs a minimum of 1000 gold
        Needs a minimum of level 10
        Needs a certain item
        Needs fulfillment of another event

    Status bonuses:
        Reduces damage by 20%
        +100 attack points
        Deflects a certain type of attack

    I wish to be able to continually change these parameters during production and operation, so having them hard-coded isn't the best way. All I could come up with are the following two methods; a rough sketch of how Method 2 might be interpreted follows at the end of this post.

    Method 1: create a table that contains each condition with the attributes it needs, i.e. a conditions model with attributes along the lines of:
        conditions
        condition_type (level, money_min, money_max, item, event_acquired)
        condition_amount
        prerequisite_condition_id
        prerequisite_item_id

    Method 2: write the conditions in a DSL form that can be interpreted later in the code, perhaps something like YAML; have a text area in the settings form and have the code interpret it:
        condition_foo:
          condition_type :level
          min_level: 10
          condition_type :item
          item_id: 2

    At the moment Method 2 looks to be more practical and flexible for future changes, the trade-off being that all the flexibility must be handled on the code side. I'm not too sure how this is supposed to be done: is it supposed to be hard-coded? In a separate config file? Any help would be appreciated. For additional info, it will be implemented with Ruby on Rails.
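
    Since the post mentions Ruby on Rails, here is a rough sketch of how a Method 2 style definition could be interpreted at runtime. The YAML layout, event name and player hash are invented for illustration and do not come from the post.

    require 'yaml'

    # Hypothetical rules: each event maps to a list of condition hashes.
    RULES = YAML.safe_load(<<~YAML)
      dragon_quest:
        - { type: level, min: 10 }
        - { type: money, min: 1000 }
        - { type: item,  item_id: 2 }
    YAML

    def conditions_met?(event, player)
      RULES.fetch(event).all? do |c|
        case c["type"]
        when "level" then player[:level] >= c["min"]
        when "money" then player[:gold]  >= c["min"]
        when "item"  then player[:items].include?(c["item_id"])
        else false # unknown condition types fail closed
        end
      end
    end

    puts conditions_met?("dragon_quest", { level: 12, gold: 1500, items: [2, 7] })  # => true

    The same approach works for status bonuses: store the YAML (or the conditions table from Method 1) in the database so it can be edited during operation, and keep only this small interpreter in code.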

    Read the article

  • Why am I unable to reconnect after losing the internet connection on Kubuntu?

    - by Jonathan
    Using Kubuntu 11.10 on a Sony Vaio computer. Network controller: Intel Corporation WiFi Link 5100. If I connect to a wireless network and the signal drops, then I am unable to connect to any network without a reboot. I can assure you that the issue has nothing to do with the computer going to sleep, as I have experienced the above while using my computer continuously. Here is exactly what happens: I connect to a network (at university, where the connection is not so great); the connection is broken; there are three other possible networks available, but none of them can be connected to. I have tried, off and on, sometimes for hours. I am always able to reestablish a connection after a reboot. I can only think of two explanations. The first is that a temp file is corrupted when the internet connection is abruptly dropped. The second is that my computer actually corrupts something before the loss of internet connection, which causes the loss in signal and the inability to reconnect. However, I am not confident that my explanations are complete, nor do I have any idea how to test these things.

    Read the article

  • Is realtime validation of username good or bad?

    - by iamserious
    I have a simple form for the user to sign up to my site, with email, username and password fields. We are now trying to implement ajax validation so the user doesn't have to post the form to find out if the username is already taken. I can do this either on the keyup event or on the text blur event. My question is: which of these is really the best way to go?

    Keyup: From the user's point of view, it would be good if the validation is done as they are typing (on the keyup event). Of course, I am waiting half a second to see if the user stops typing before firing off the request, and the user can make any adjustments immediately. But this means I am sending far more requests than if I validated the username on the blur event.

    Blur: The number of requests will be much lower when the validation is done on the blur event, but this means the user has to actually leave the textbox, look at the validation result, and if necessary go back to it to make any changes, repeating the whole process until they get it right.

    I had a quick look at Google, Tumblr and Twitter, and no one actually does username validation on keyup events (heck, Tumblr waits for the form to be posted), but I can swear I have seen keyup validations in a lot of places too. So, coming back to the question: will keyup validations be too many for the server? Is it an unnecessary overhead, or is it worth taking these hits to give the user a better experience?

    ps: All my regex validations etc. are already done in JavaScript, and only when a username passes all these other criteria does a request get sent to the server to check if the username already exists. (And the server is doing a select count(1) from user where username = '' - nothing substantial, but still enough to occupy some resource.)

    pps: I'm on the ASP.NET, MS SQL stack, if that matters.
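
    For reference, the "wait half a second after the last keystroke" behaviour described above is usually implemented as a debounce. A minimal client-side sketch follows; the /check-username endpoint, the element id and the response shape are made up for illustration, not taken from the post.

    // Fire the availability check only after the user has paused typing for 500 ms.
    var debounceTimer = null;
    var input = document.getElementById('username');

    input.addEventListener('keyup', function () {
      clearTimeout(debounceTimer);
      debounceTimer = setTimeout(function () {
        fetch('/check-username?u=' + encodeURIComponent(input.value))   // hypothetical endpoint
          .then(function (res) { return res.json(); })                  // e.g. {"taken": true}
          .then(function (data) {
            input.setCustomValidity(data.taken ? 'Username already taken' : '');
          });
      }, 500);
    });

    The timer caps the request rate at roughly one per pause in typing, which is the usual compromise between the keyup and blur approaches.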

    Read the article

  • Varnish VCL not allowing two separate IP addresses as backends

    - by Peter Griffin
    Every time I attempt to add an extra backend into our VCL file, it fails. Here are the DAEMON_OPTS we are running with:

    DAEMON_OPTS="-a :80 \
                 -T localhost:6082 \
                 -f /etc/varnish/custom.vcl \
                 -u varnish -g varnish \
                 -S /etc/varnish/secret \
                 -s malloc,10G"

    And here are the offending backends:

    backend default {
        .host = "114.123.456.789";
        .port = "8080";
    }

    backend alt {
        .host = "203.123.456.789";
        .port = "80";
    }

    Any ideas? My gut feeling is that the backends might need to be selected somewhere, but I'm not sure where.
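
    That gut feeling is the usual direction: a backend that is only declared is never used unless something routes requests to it, typically in vcl_recv. A rough sketch in Varnish 2/3 syntax follows (the host name is a placeholder; in Varnish 4+ the assignment would be set req.backend_hint = alt;):

    sub vcl_recv {
        # Send one site to the alternate backend, everything else to the default.
        if (req.http.host ~ "alt\.example\.com") {
            set req.backend = alt;
        } else {
            set req.backend = default;
        }
    }

    This also matters because some Varnish versions refuse to compile a VCL that declares a backend which is never referenced, which would show up as exactly the kind of failure described when the second backend is added.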

    Read the article

  • Compaq Q2009 monitor display issues

    - by jasonco
    I have a Compaq Q2009 20-inch monitor and I am having display issues. When I turn on the computer the image is displayed fine for a fraction of a second and then the display goes black. If I turn the monitor off and on again the same thing happens: the image is displayed for a fraction of a second and then goes black. This is odd. I've looked all over the web and I haven't found anything useful (not even on the HP website). There is no "out of range" or "recommended resolution" message; it just displays and then goes black. I think the resolution is fine, because it was working OK until recently and I haven't changed anything in that regard. Any help would be appreciated. Thanks.

    Read the article

  • Unity Dash won't find local files or let me rearrange icons, on two computers

    - by Stanton.Sculpture
    Suddenly I can't move icons around my Unity Launcher, and the Dash won't search for my local files and folders. This was working when I first installed 13.10, but now it won't search for local files, and it won't let me rearrange the icons in any way. I've tried turning all the scopes (lenses?) on and off in multiple combinations, but it won't find any files unless I use Nautilus to find them; it's mostly unresponsive. I can't see my recently used files, or the files and folders scope, at all. Dragging and dropping the icons on the side dock doesn't work; they only stick to my mouse until I put them back where they were. I cannot unlock any icons from the launcher; it just doesn't do anything when I click it. I tried rebooting both of my computers and it still won't function normally. I used ubuntu-bug -w to report a bug, but no one has gotten back to me. Is there some option that I changed to cause this? This is a problem on both my laptop and desktop. Please help. Alex

    Read the article

  • OBIEE 11.1.1.6.5 Bundle Patch released Oct 2012

    - by user554629
    October 2012: the OBIEE 11.1.1.6.5 Bundle Patch has been released. Bundle patches are collections of controlled, well-tested critical bug fixes for a specific product, which may include security content and occasionally minor enhancements. They are cumulative in nature, meaning the latest bundle patch in a particular series includes the contents of the previous bundle patches released. A suite bundle patch is an aggregation of multiple product bundle patches that are part of a product suite. For OBIEE on 11.1.1.6.0, we plan to run a monthly bundle patch cadence.

    The 11.1.1.6.5 bundle patch:
        - is available for download from My Oracle Support
        - is cumulative, so it includes everything from previous updates
        - is available for supported platforms (Windows, Linux, Solaris, AIX, HPUX-IA)

    To find it, navigate to https://support.oracle.com and log in, go to the Knowledge Base tab, select a product line [Business Intelligence], select a task [Patching and Maintenance], and click Search. See "OBIEE 11g: Required and Recommended Patches and Patch Sets", ID 1488475.1 (Oct 23, 2012); 11.1.1.6.5 was published on 19th October 2012.

    Note: The 11.1.1.6.x versions on top of 11.1.1.6.0 are not upgrades; they are opatch fixes. This is not an upgrade process like going from OBIEE 10g to 11g, or from OBIEE 11.1.1.5 to 11.1.1.6. It is much safer than applying one-off fixes, which are not regression tested. You will be more successful using 11.1.1.6.5.

    Read the article
