Search Results

Search found 26969 results on 1079 pages for 'prevent default'.


  • Where are YouTube video files stored on the system nowadays?

    - by souravc
    When I open a YouTube video in Firefox, I cannot find any video file inside ~/.mozilla/firefox/<some_string>.default/Cache/. I also tried with google-chrome:

        ps ax | grep flash
        ls -l /proc/[*PID*]/fd | grep Flash

    But again, /proc/[*PID*]/fd has no video-like file. ls -l /proc/[*PID*]/fd gives results like:

        lrwx------ 1 root root 64 Nov 9 12:18 22 -> /run/shm/.com.google.Chrome.eOsHNu (deleted)
        lrwx------ 1 root root 64 Nov 9 12:18 23 -> /run/shm/.com.google.Chrome.p8h6BL (deleted)

    The result of ls -l /proc/[*PID*]/fd | grep Flash for some videos from other sites looks like:

        lrwx------ 1 root root 64 Nov 9 12:35 26 -> /home/username/.config/google-chrome/Default/Pepper Data/Shockwave Flash/.com.google.Chrome.QMzxP8 (deleted)

    but it could not be copied. So where do Firefox and google-chrome store streaming videos, and is it possible to recover (copy) a video from there to watch it offline? P.S. I have other ways (downloaders) to save streaming videos, but my question is very specific.
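
    As an aside, a file that shows as (deleted) in /proc but is still held open can often be copied straight out of procfs while the owning process is alive. A minimal sketch, assuming a PID and fd number taken from a listing like the one above (both values are placeholders):

        # copy the still-open, already-unlinked buffer file to a regular file
        PID=1234   # chrome/plugin process id (assumption)
        FD=26      # fd pointing at the "(deleted)" entry (assumption)
        cp /proc/$PID/fd/$FD ~/recovered-video.flv

    Whether the result is playable depends on how much of the stream had actually been buffered at that moment.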

    Read the article

  • Active Directory: trouble adding new DC

    - by ethrbunny
    I have a domain with 3 DCs, all running Windows 2003. One of them is starting to fail, so I brought up a new one. Problem: there appear to be replication issues among the 4 machines, but I can't figure out what's causing them. All are registered with DNS as identically as I can make them.

    How do I know there is a problem? Nagios is telling me that the other 3 DCs are getting KCCEvent errors, and the new machine is reporting "failed connectivity" errors. Running dcdiag on the new machine reports that the host could not be resolved to an IP address. This seems crazy, as I log into it using the DNS name, and I can ping it from the other three machines using that DNS name as well. repadmin /showreps from the new machine says it sees the other 3 machines; doing the same from one of the older machines doesn't show the new machine. I've tried netdiag /repair numerous times, with no luck. There are no firewalls running on any of the machines. If I look at the domain info via MMC (on the new machine), all the information appears current: users, computers, DCs, it's all there. I'm puzzled as to what step(s) I've missed in adding this new machine. Suggestions?

    EDIT: dcdiag from the non-working DC:

        C:\Documents and Settings\Administrator.BME>dcdiag
        Domain Controller Diagnosis
        Performing initial setup:
           Done gathering initial info.
        Doing initial required tests
           Testing server: Default-First-Site-Name\YELLOW
              Starting test: Connectivity
                 The host 312ce6ea-7909-4e15-aff6-45c3d1d9a0d9._msdcs.server.edu
                 could not be resolved to an IP address. Check the DNS server,
                 DHCP, server name, etc. Although the Guid DNS name
                 (312ce6ea-7909-4e15-aff6-45c3d1d9a0d9._msdcs.server.edu)
                 couldn't be resolved, the server name (yellow.server.edu)
                 resolved to the IP address (10.127.24.79) and was pingable.
                 Check that the IP address is registered correctly with the
                 DNS server.
                 ......................... YELLOW failed test Connectivity
        Doing primary tests
           Testing server: Default-First-Site-Name\YELLOW
              Skipping all tests, because server YELLOW is not responding to
              directory service requests
           Running partition tests on : Schema
              Starting test: CrossRefValidation
                 ......................... Schema passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... Schema passed test CheckSDRefDom
           Running partition tests on : Configuration
              Starting test: CrossRefValidation
                 ......................... Configuration passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... Configuration passed test CheckSDRefDom
           Running partition tests on : bme
              Starting test: CrossRefValidation
                 ......................... bme passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... bme passed test CheckSDRefDom
           Running enterprise tests on : server.edu
              Starting test: Intersite
                 ......................... server.edu passed test Intersite
              Starting test: FsmoCheck
                 ......................... server.edu passed test FsmoCheck

    dcdiag from a working DC:

        P:\>dcdiag
        Domain Controller Diagnosis
        Performing initial setup:
           Done gathering initial info.
        Doing initial required tests
           Testing server: Default-First-Site-Name\AD1
              Starting test: Connectivity
                 ......................... AD1 passed test Connectivity
        Doing primary tests
           Testing server: Default-First-Site-Name\AD1
              Starting test: Replications
                 ......................... AD1 passed test Replications
              Starting test: NCSecDesc
                 ......................... AD1 passed test NCSecDesc
              Starting test: NetLogons
                 ......................... AD1 passed test NetLogons
              Starting test: Advertising
                 ......................... AD1 passed test Advertising
              Starting test: KnowsOfRoleHolders
                 ......................... AD1 passed test KnowsOfRoleHolders
              Starting test: RidManager
                 ......................... AD1 passed test RidManager
              Starting test: MachineAccount
                 ......................... AD1 passed test MachineAccount
              Starting test: Services
                 ......................... AD1 passed test Services
              Starting test: ObjectsReplicated
                 ......................... AD1 passed test ObjectsReplicated
              Starting test: frssysvol
                 ......................... AD1 passed test frssysvol
              Starting test: frsevent
                 ......................... AD1 passed test frsevent
              Starting test: kccevent
                 ......................... AD1 passed test kccevent
              Starting test: systemlog
                 ......................... AD1 passed test systemlog
              Starting test: VerifyReferences
                 ......................... AD1 passed test VerifyReferences
           Running partition tests on : Schema
              Starting test: CrossRefValidation
                 ......................... Schema passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... Schema passed test CheckSDRefDom
           Running partition tests on : Configuration
              Starting test: CrossRefValidation
                 ......................... Configuration passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... Configuration passed test CheckSDRefDom
           Running partition tests on : bme
              Starting test: CrossRefValidation
                 ......................... bme passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... bme passed test CheckSDRefDom
           Running enterprise tests on : server.edu
              Starting test: Intersite
                 ......................... server.edu passed test Intersite
              Starting test: FsmoCheck
                 ......................... server.edu passed test FsmoCheck
        P:\>
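
    Since the only failing item is the GUID CNAME under _msdcs, one hedged avenue is to re-register the new DC's DNS records and re-test (standard Windows 2003 commands; the GUID is the one from the dcdiag output above):

        nslookup 312ce6ea-7909-4e15-aff6-45c3d1d9a0d9._msdcs.server.edu
        ipconfig /registerdns
        net stop netlogon && net start netlogon
        dcdiag /test:connectivity

    Restarting Netlogon forces the DC to re-register its SRV and GUID CNAME records; if the records still do not appear, dynamic updates on the _msdcs zone are worth checking.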

    Read the article

  • How do I fix the HDMI/DVI display output with Intel HD 4000 Graphics in 12.04?

    - by YumYumYum
    I have an Alienware Dell PC with Intel HD 4000 Graphics (Ivy Bridge), as verified by the output of lspci | grep VGA:

        00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)

    The PC only has HDMI and DVI display outputs, and on the HDMI output I am only offered abnormal resolutions. As you can see below, xrandr does not even list HDMI1 or DVI1, just a fallback output:

        $ export DISPLAY=:0.0 && xrandr
        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 640 x 480, current 1360 x 768, maximum 1360 x 768
        default connected 1360x768+0+0 0mm x 0mm
           1360x768       0.0*
           1024x768       0.0
           800x600        0.0
           640x480        0.0

    How can I fix this? Does it just need to be configured differently, or will I need a newer kernel (since the Intel graphics driver is part of the kernel)?

    Follow-up: upgrading to the latest mainline kernel.

    Step 1: Go to http://kernel.ubuntu.com/~kernel-ppa/mainline/ and into the latest directory, http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/. Download:

        http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/linux-headers-3.6.0-030600rc3-generic_3.6.0-030600rc3.201208221735_amd64.deb
        http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/linux-headers-3.6.0-030600rc3_3.6.0-030600rc3.201208221735_all.deb
        http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/linux-image-3.6.0-030600rc3-generic_3.6.0-030600rc3.201208221735_amd64.deb
        http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/linux-image-extra-3.6.0-030600rc3-generic_3.6.0-030600rc3.201208221735_amd64.deb

    Step 2: sudo dpkg -i linux*.deb

    Step 3: Reboot. uname -a confirms Ubuntu 12.04 is now on the latest kernel:

        $ uname -a
        Linux sun-Alienware-X51 3.6.0-030600rc3-generic #201208221735 SMP Wed Aug 22 21:36:32 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    But the same problem remains.
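
    When a mode is missing but the output itself responds, one stopgap (a sketch: the output name "default" is taken from the xrandr listing above, and the modeline comes straight from cvt) is to add 1080p by hand:

        cvt 1920 1080 60   # prints the modeline used below
        xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
        xrandr --addmode default 1920x1080_60.00
        xrandr --output default --mode 1920x1080_60.00

    That said, an output named "default" with no HDMI1/DVI1 usually means the intel KMS driver is not binding at all, in which case the kernel/driver is the real thing to fix.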

    Read the article

  • Why is it good to have website content files on a separate drive from the system (OS) drive?

    - by Jeffrey
    I am wondering what benefits I would gain by moving all website content files from the default inetpub directory on C: to something like D:\wwwroot. By default IIS creates a separate application pool for each website, and I am using the built-in user and group (IURS) as the authentication method. I've made sure each site directory has the appropriate permission settings, so I am not sure what benefits I would gain. Some of the environment settings are:

        VMware
        Windows 2008 R2 64-bit
        IIS 7.5
        C:\inetpub\site1
        C:\inetpub\site2

    Also, as this article (moving the IIS7 inetpub directory to a different drive) points out, I'm not sure it's worth the trouble to migrate files to a different drive:

        PLEASE BE AWARE OF THE FOLLOWING: WINDOWS SERVICING EVENTS (I.E. HOTFIXES AND SERVICE PACKS) WOULD STILL REPLACE FILES IN THE ORIGINAL DIRECTORIES. THE LIKELIHOOD THAT FILES IN THE INETPUB DIRECTORIES HAVE TO BE REPLACED BY SERVICING IS LOW BUT FOR THIS REASON DELETING THE ORIGINAL DIRECTORIES IS NOT POSSIBLE.

    Read the article

  • NHibernate Pitfalls: Cascades

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here.

    For entities that have associations (one-to-one, one-to-many, many-to-one or many-to-many), NHibernate needs to know what to do with the related entities at three particular moments: when saving, updating or deleting. There are two possible behaviors: either ignore the related entities or cascade changes to them. NHibernate allows setting the cascade behavior for each association, and the default behavior is not to cascade (ignore). The possible cascade options are:

    - None: ignore; this is the default.
    - Save-Update: if the entity is being saved or updated, also save any related entities that are either not yet saved or have been modified, and associate them with the root entity. Generally safe.
    - Delete: if the entity is being deleted, also delete the related entities. This is only useful for parent-child relations.
    - Delete-Orphan: identical to Delete, with the addition that if one related entity is removed from the association (orphaned), it is also deleted. Also only for parent-child relations.
    - All: combination of Save-Update and Delete; usually that's what we want (for parent-child relations, of course).
    - All-Delete-Orphan: same as All, plus deleting any related entities that lose their relationship.

    In summary, Save-Update is generally what you want in most cases. As for the Delete variations, they should only be used if the related entities depend on the root entity (parent-child), so that deleting the root entity but not its related entities would result in a constraint violation in the database.
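
    To make the options concrete, here is a minimal mapping sketch (hbm.xml syntax; the Children/Child names and ParentId column are invented for illustration) that puts all-delete-orphan on a parent-child collection:

        <set name="Children" cascade="all-delete-orphan" inverse="true">
          <key column="ParentId" />
          <one-to-many class="Child" />
        </set>

    With this in place, saving the parent saves its new or dirty children, deleting it deletes them, and removing a child from the collection deletes that orphan.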

    Read the article

  • Switching DVI socket

    - by lurscher
    I have Ubuntu 10.10 x86_64 with an Nvidia 9800 GT and Nvidia driver version 270.41.06. My video card has two DVI sockets, but I only use the single-monitor configuration. Now I think the main DVI socket might be busted, so I want to enable the other one as the main socket, but I don't know how to achieve that. I tried just plugging the monitor into that socket, but it won't auto-detect it (it would have been way too easy to just work). This is my xorg.conf:

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: builtin, VertRefresh source: builtin
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "AOC"
            HorizSync       31.5 - 84.7
            VertRefresh     60.0 - 78.0
            ModeLine       "1080p" 172.8 1920 2040 2248 2576 1080 1081 1084 1118 -hsync +vsync
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce 9800 GT"
        EndSection

        Section "Screen"
            # Removed Option "metamodes" "1024x768 +0+0"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            Option         "CustomEDID" "CRT-0:/home/charlesq/lg.bin"
            Option         "TVStandard" "HD1080p"
            Option         "TwinView" "0"
            Option         "TwinViewXineramaInfoOrder" "CRT-0"
            Option         "metamodes" "1080p +0+0"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection
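
    One thing worth noting: the CustomEDID and TwinViewXineramaInfoOrder options above are pinned to CRT-0, so the second connector (which the driver will typically call DFP-0 or DFP-1) inherits none of that. A hedged sketch of what to try in the Device section; the connector name is a guess and should be checked against the names the driver logs:

        # check which connector names the driver actually reports
        grep -i "connected display" /var/log/Xorg.0.log

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            # DFP-0 is an assumption; substitute the name found in the log
            Option         "ConnectedMonitor" "DFP-0"
        EndSection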

    Read the article

  • Yum install packages to an alternate directory, without chroot?

    - by Stefan Lasiewski
    I would like to use the Foswiki yum repository to install Foswiki (296 packages). The default installation path is /var/lib/, but I want to install to an alternate location, /opt/www/. In the future, I still want to use yum to check for and apply updates to the packages. Is it possible to use yum to install packages to a location different from the default one built into the RPMs? Does yum provide anything similar to ./configure --prefix=/usr/local/ or rpm --install --prefix=/opt/local? Yum provides the --installroot option, but that appears to be primarily for chroot environments.
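
    For what it's worth, a quick sketch of both mechanisms (the .rpm filename is a placeholder): rpm's --prefix only works when the package was built relocatable, while yum's --installroot re-roots the entire filesystem view, which is why it is normally paired with chroots:

        # does the RPM declare itself relocatable? look at the "Relocations" header
        rpm -qpi foswiki.rpm | grep -i relocation

        # rpm-level relocation (refused unless the package is relocatable)
        rpm -i --prefix=/opt/www foswiki.rpm

        # yum's nearest equivalent, aimed at chroot-style trees
        yum --installroot=/opt/www install foswiki

    If the Foswiki packages are not built relocatable, a symlink from the /var/lib path to /opt/www may be the only yum-friendly option.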

    Read the article

  • Cannot boot: how to recover?

    - by Kendor
    I'm running 11.10 64-bit with GNOME Shell. Something happened late Friday whereby my machine never gets to the login screen. I do get to the Ubuntu splash logo; after that I get a text screen that it hangs on. The screen refers to issues mounting various network resources, including VMware and also some references to my NAS that are in fstab. If I hit Esc I can get to the GRUB menu and into the recovery console. If I try to do a file system check, I run into a similar error screen to the one I see when trying to boot normally.

    A possible clue: during my last good session I made some mods to the /etc/hosts file to reference another system, which I'm connecting to with Synergy. I don't believe I have hardware issues, as I'm able to boot properly with a live USB and connect to my network/Internet.

    A few more tidbits: I have regular Deja Dup backups on my NAS, I have a good Clonezilla whole-drive image which is 4-6 weeks old, and my home is encrypted. I thought I'd try blowing away my hosts file via live USB, but when I mounted the hard drive everything was read-only and I couldn't figure out how to replace it.

    P.S. I logged in via CLI and modified the hosts file to remove the entry I'd made, to no avail. The system continues to get stuck on the following:

        CIFS VFS: default security mechanism requested. The default security mechanism will be upgraded from ntlm to ntlmv2 in kernel version 3.1s

    Would love some sober advice on how to attack this.
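
    Since the hang happens while mounting network shares from fstab, one hedged workaround is to stop the CIFS entries from blocking boot at all (the share path, mount point and credentials file below are placeholders):

        # in /etc/fstab: nobootwait lets mountall carry on if the mount fails,
        # and _netdev defers the mount until the network is up
        //nas/share  /mnt/nas  cifs  credentials=/root/.cifscreds,_netdev,nobootwait  0  0

    After booting cleanly, mounting the share by hand (sudo mount /mnt/nas) should surface the real CIFS error message.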

    Read the article

  • 10-system LAN latency with ADSL modem as gateway

    - by itsoft3g
    I recently expanded the LAN in my office from 3 to 10 computers. The structure is a star topology: one ADSL modem connected to one switch, which in turn connects to the 10 computers. We also have a Netgear Wi-Fi device connected to the switch. The ADSL modem acts as the DHCP server, and all systems use the modem's IP as their default gateway.

    Network latency has now become very high: chat services like Google Talk and Skype disconnect often, and the internet becomes very, very slow when all the computers are turned on. We have 4 Mbps download and 100 Kbps upload speed. It looks like the ADSL modem cannot handle all the connections. I tried to set up a system as the default gateway, which would then connect to the modem, but I'm not sure how to do this. Please advise.
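
    For reference, a minimal sketch of turning a spare Linux box into the LAN's gateway (the interface names are assumptions: eth0 faces the modem, eth1 faces the switch). Clients then use this box's LAN address as their default gateway instead of the modem's:

        # enable packet forwarding
        sysctl -w net.ipv4.ip_forward=1

        # NAT everything leaving towards the modem
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    DHCP would also need to move off the modem (or the clients be pointed at the new gateway statically), otherwise the modem keeps handing itself out as the gateway.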

    Read the article

  • The Mac Tax

    - by Robert May
    One of our users was having difficulties with their Mac and some web software, so I decided to peruse the landscape and see how much of a premium people were paying for their Macs. I priced out a Dell and a Mac from their websites, trying to get them as close to the same hardware configuration as I could: an Apple MacBook Pro and a Dell XPS 17. There are several important differences in the hardware:

    - The Mac doesn't have a Blu-ray player, but the Dell does.
    - The Mac has a slightly slower processor.
    - The Mac claims to have a better battery, but doesn't list the specifics, so there's no way to tell.
    - The Mac doesn't list the video card stats, so there's no way to tell how comparable they are, but they're probably close.
    - The Mac doesn't come with any additional software: no iWork, iPhoto, etc. They were left at their default of None, so arguably the Dell is more functional out of the box.

    Other than changing the hardware specs to be close, all other configuration options were left at their defaults.

    So riddle me this, Batman: why do people buy Macs? I have several dev buddies who own them, but I can't justify the cost. First, most of them load Boot Camp and/or Parallels, at extra cost, to run Windows 7 and Windows apps. The hardware isn't as good. The price is almost twice as high. How do you justify the premium price?

    Read the article

  • Remastering Knoppix: which version and what method should I use?

    - by Stan_
    I'd like to remaster Knoppix (mainly add and configure some software). I downloaded the newest version (KNOPPIX-ADRIANE_V6.2CD-2009-11-18-EN.iso), but later I read that it uses some other window manager as the default, not KDE, and I want KDE on my remaster. Is KDE included on that ISO but not set as the default, or is it not included at all? If it's not there, which Knoppix version should I get for my remaster? My other question: I've seen some remastering scripts (with menus, etc.) on the Knoppix forums; do any of these work with the version I have, or with the version I should have if I need KDE?

    Read the article

  • Is it possible to get xRandR to see two separate outputs with the nvidia driver?

    - by rumtscho
    I have two monitors, which I have set up with nvidia-settings in TwinView. The result: when I want to do something in xRandR, it does not work. It doesn't report one output per video card head, but a single output mapped to the combined area of both monitors:

        rumtscho@bradbury:~$ xrandr
        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 3840 x 1440, current 3840 x 1440, maximum 3840 x 1440
        default connected 3840x1440+0+0 0mm x 0mm
           3840x1440      50.0*

    Now I promised somebody I'd help test a driver. The developer is using an open source driver for Intel video cards, and his driver assumes that there is more than one xRandR output, each mapped to a monitor. So I tried rewriting my xorg.conf to somehow get two outputs to show up, but failed. Googling showed that people faced with the xRandR/nvidia problem either stopped using xRandR and achieved what they needed with nvidia-settings, or changed their driver to nouveau. The first won't help in my situation, and I am not willing to give up the proprietary driver, because Compiz won't work without it. So does anybody know a way to get nvidia to actually pass information about its outputs on to xRandR?

    Read the article

  • Virtual hosting in Varnish with individual vcl files for configuration

    - by Michael Sørensen
    I wish to use Varnish in front of an Apache and a Tomcat on the same server. Depending on the IP requested, traffic goes to a different backend; this works. Now, for most of the sites the default Varnish logic will work just fine, but for some specific sites I wish to use custom VCL code. I can test for host name and include config files for the specific domains, but this only works inside the individual methods, recv etc. Is there a way to include a complete set of instructions, in one file, per domain, without having to manage separate files for subdomain_recv, subdomain_fetch etc.? And preferably without running separate instances of Varnish. When I try to include a file at the "root level" of default.vcl, I get a compilation error. Best regards, Michael
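
    One pattern that may help (a sketch, untested; the file path and sub names are invented): keep one file per domain, but give its routines unique names and dispatch to them from the built-in hooks, since VCL only allows sub definitions, not bare statements, at the root level:

        # default.vcl
        include "/etc/varnish/sites/example_com.vcl";  # defines recv_example_com, fetch_example_com

        sub vcl_recv {
            if (req.http.host ~ "(^|\.)example\.com$") {
                call recv_example_com;
            }
        }

        sub vcl_fetch {
            if (req.http.host ~ "(^|\.)example\.com$") {
                call fetch_example_com;
            }
        }

    Each domain still contributes one sub per hook, but they all live in a single per-domain file, and only default.vcl carries the dispatch boilerplate.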

    Read the article

  • Implementing new required feature after software release

    - by TiagoBrenck
    Fake scenario: there is a piece of software that was released 1 year ago. Its job is to map and register all kinds of animals on our planet. When it was released, the client only needed to know the scientific name of the animal, a flag for whether it is at risk of extinction, and a scale of dangerousness (this is a fake software and specification; I don't want to discuss that here). There are already 100,000 animal records saved in the DB.

    New feature: one year later, the client wants a new feature. It is really important to him to know each animal's class, and this is a required field. So he asks me to add a field to input the animal class, and make this field required. Or maybe where the animal was discovered.

    Problem: I already have 100,000 recorded animals without a class or a discovery location, but I need to add a new column to store this information, and this column can't be null. I don't have a default value for this situation (there is no default animal class or place of discovery). I don't want to keep the requirement rule only in my software; my DB must enforce it too (I like to keep business rules in the DB as well). What are the alternatives for solving this situation? The new data cannot be supplied or reviewed for the existing records: the time has already passed, and I can't go back in time to collect it.
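
    On the schema side, a common hedged pattern (T-SQL syntax; the table, column and sentinel value are assumptions based on the scenario) is to add the column as nullable, backfill a named "unknown" sentinel for legacy rows, and only then switch the constraint on:

        -- step 1: add the column without the constraint
        ALTER TABLE Animals ADD AnimalClass VARCHAR(100) NULL;

        -- step 2: backfill existing rows with an explicit sentinel
        UPDATE Animals SET AnimalClass = 'Unclassified' WHERE AnimalClass IS NULL;

        -- step 3: enforce the rule for everything written from now on
        ALTER TABLE Animals ALTER COLUMN AnimalClass VARCHAR(100) NOT NULL;

    The sentinel keeps the NOT NULL rule honest in the DB while still distinguishing "recorded before the rule existed" from real data.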

    Read the article

  • ASP.NET directories blocked from VisualSVN Server behind reverse proxy in IIS 6

    - by user143344
    I've got VisualSVN Server running behind a reverse proxy in IIS 6 on Windows Server 2003. This isn't ideal, but for the main web app on the server I've only got one IP address and SSL certificate available. Everything works except when trying to commit to or browse the default ASP.NET directories (App_Browsers, App_Code, App_Data). SVN commits fail for these directories, which I believe is because IIS will never serve them by default. The reverse proxy uses a virtual directory in IIS; is there a change I can make in the web.config for this virtual directory to get around the issue?

    Read the article

  • Changing Admin Site URL (actually port) - how?

    - by TomTom
    I have a new install of the brand new SharePoint 2010. I use host-header-identified site collections for everything. By default, the admin site is on a random port. I would like to move the admin site to port 80 on the server name. As all sites have coded names (for example "intranet", "projects"), this would allow administration via the server name, which is easier since external access does not have to remember the port number. How do I do this? I have already changed the default URL, but the site (application) is still wrongly mapped. I don't find anything to change the IIS settings in the admin site; possibly I just missed it, so can anyone point me in the right direction?
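
    For reference, SharePoint 2010 can re-provision Central Administration onto another port by itself, updating the IIS bindings and alternate access mappings along the way. A sketch, assuming it is run from the SharePoint 2010 Management Shell on the server:

        Set-SPCentralAdministration -Port 80

    The older command-line equivalent is psconfig -cmd adminvip -port 80. Note that port 80 must not already be claimed by another IIS site bound without a host header.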

    Read the article

  • Why don't languages use explicit fall-through on switch statements?

    - by zzzzBov
    I was reading "Why do we have to use break in switch?", and it led me to wonder why implicit fall-through is allowed in some languages (such as PHP and JavaScript), while there is no support (AFAIK) for explicit fall-through. It's not as if a new keyword would need to be created, as continue would be perfectly appropriate, and it would resolve any ambiguity about whether the author meant for a case to fall through. The currently supported form is:

        switch (s) {
            case 1:
                ...
                break;
            case 2:
                ...
                // ambiguous: was break forgotten?
            case 3:
                ...
                break;
            default:
                ...
                break;
        }

    Whereas it would make sense for it to be written as:

        switch (s) {
            case 1:
                ...
                break;
            case 2:
                ...
                continue; // unambiguous: the author was explicit
            case 3:
                ...
                break;
            default:
                ...
                break;
        }

    For the purposes of this question, let's ignore whether fall-throughs are good coding style. Are there any languages that allow fall-through and have made it explicit? Are there any historical reasons that switch allows implicit fall-through instead of explicit?
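
    As a data point, C# already answers the first question: falling into a non-empty case implicitly is a compile-time error, and the explicit form is goto case. A sketch (the Do* methods are placeholders):

        switch (s)
        {
            case 1:
                DoOne();
                break;
            case 2:
                DoTwo();
                goto case 3;   // explicit fall-through; omitting it will not compile
            case 3:
                DoThree();
                break;
            default:
                break;
        }

    Bash's case statement is another example: ;& continues into the next pattern's body, while the usual ;; stops.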

    Read the article

  • FeedValidator & FeedBurner get 404 when accessing WordPress RSS feeds with permalinks enabled

    - by Wazbaur
    I'm helping a friend set up a self-hosted WordPress blog plus FeedBurner, and I'm seeing a somewhat mysterious problem with the feeds. Using the default permalink structure (e.g., ?p=123) everything works as expected: I can follow the feed in Google Reader, navigate to it manually, and set it up in FeedBurner. However, once I switch away from the default permalink structure, FeedBurner and FeedValidator both report that accessing the feed returns HTTP 404, and Google Reader no longer shows new posts (I assume for the same reason), yet I can still navigate to the feed in a browser. When I do, nothing looks wrong: the feed is there and contains all the posts I expect. I've restarted the FeedBurner and Reader setup from scratch after changing the link structure, so I don't think they're looking at the feed at its old address. I've seen people with similar problems in various other places, but there doesn't seem to be a good answer anywhere.
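
    Since custom permalinks lean entirely on the rewrite layer, one hedged first check is that the standard WordPress rules are present and honored for non-browser clients. This is the stock block WordPress writes to .htaccess (it assumes Apache with AllowOverride enabled):

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    Fetching the feed with a non-browser user agent, e.g. curl -I -A FeedValidator <feed-url>, can confirm whether the 404 depends on the client.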

    Read the article

  • The long road to bug-free software

    - by Tony Davis
    The past decade has seen a burgeoning interest in functional programming languages such as Haskell or, in the Microsoft world, F#. Though still on the periphery of mainstream programming, functional programming concepts are gradually seeping into the imperative C# language (for example, lambda expressions have their roots in functional programming).

    One of the more interesting concepts from functional programming languages is the use of formal methods, the lofty ideal behind which is bug-free software. The idea is that we write a specification that describes exactly how our function (say) should behave. We then prove that our function conforms to it, and in doing so have proved beyond any doubt that it is free from bugs.

    All programmers already use one form of specification, specifically their programming language's type system. If a value has a specific type then, in a type-safe language, the compiler guarantees that value cannot be an instance of a different type. Many extensions to existing type systems, such as generics in Java and .NET, extend the range of programs that can be type-checked.

    Unfortunately, type systems can only prevent some bugs. To take a classic problem of retrieving an index value from an array: since the type system doesn't specify the length of the array, the compiler has no way of knowing that a request for the "value of index 4" from an array of only two elements is "unsafe". We restore safety via exception handling, but the ideal type system will prevent us from doing anything that is unsafe in the first place, and this is where we start to borrow ideas from a language such as Haskell, with its concept of "dependent types". If the type of an array includes its length, we can ensure that any index accesses into the array are valid.

    The problem is that we now need to carry around the length of arrays and the values of indices throughout our code so that it can be type-checked. In general, writing the specification to prove a positive property, even for a problem very amenable to specification, such as a simple sorting algorithm, turns out to be very hard, and the specification will be different for every program. Extend this to writing a specification for, say, Microsoft Word and we can see that the specification would end up being no simpler, and therefore no less buggy, than the implementation.

    Fortunately, it is easier to write a specification that proves that a program doesn't have certain, specific and undesirable properties, such as infinite loops or accesses to the wrong bit of memory. If we can write the specifications to prove that a program is immune to such problems, we could reuse them in many places. The problem is the lack of specification "provers" that can do this without a lot of manual intervention (i.e. hints from the programmer).

    All this might feel a very long way off, but computing power and our understanding of the theory of "provers" advance quickly, and Microsoft is doing some of it already. Via their Terminator research project they have started to prove that their device drivers will always terminate, and in so doing have suddenly eliminated a vast range of possible bugs. This is a huge step forward from saying, "we've tested it lots and it seems fine".

    What do you think? What might be good targets for specification and verification? SQL could be one: the cost of a bug in SQL Server is quite high given how many important systems rely on it, so there's a good incentive to eliminate bugs, even at high initial cost.
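
    To make the length-indexed array idea above concrete, here is a minimal Haskell sketch (GHC extensions emulating dependent types; all names are illustrative). The vector's length lives in its type, so asking for the head of an empty vector is rejected at compile time rather than caught at run time:

        {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

        -- natural numbers, promoted to the type level by DataKinds
        data Nat = Z | S Nat

        -- a vector indexed by its length
        data Vec (n :: Nat) a where
          VNil  :: Vec 'Z a
          VCons :: a -> Vec n a -> Vec ('S n) a

        -- head is only defined for non-empty vectors
        vhead :: Vec ('S n) a -> a
        vhead (VCons x _) = x

        -- vhead VNil   -- does not type-check: the out-of-range bug cannot be written

    This is exactly the trade described above: the guarantee costs nothing at run time, but the length must now thread through every type signature.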
[Many thanks to Mike Williamson for guidance and useful conversations during the writing of this piece] Cheers, Tony.

    Read the article

  • Start vino-server (VNC) before login on Linux CentOS

    - by Dr. Gianluigi Zane Zanettini
    I'm using the default vino-server package to access my CentOS 6 workstation via VNC. It works OK, but only after I log in locally on the workstation. I need vino-server to start before login, right at the GNOME login screen where I choose username and password. For personal reasons, I need to use Vino and not vnc-server or any other package. I already tried adding /usr/libexec/vino-server & to /etc/gdm/Init/Default, but this didn't solve the issue.

    Read the article

  • WebSVN accept untrusted HTTPS certificate

    - by Laurent
    I am using WebSVN with a remote repository which uses the https protocol. After configuring WebSVN, I get this on the WebSVN web page:

        svn --non-interactive --config-dir /tmp list --xml --username '***' --password '***' 'https://scm.gforge.....'
        OPTIONS of 'https://scm.gforge.....': Server certificate verification failed: issuer is not trusted

    I don't know how to tell WebSVN to run the svn command so that it accepts and stores the certificate. Does someone know how to do this?

    UPDATE: It works! In order to keep things well organized, I updated the WebSVN config file to relocate the Subversion config directory to /etc/subversion, which is the default path for Debian:

        $config->setSvnConfigDir('/etc/subversion');

    In /etc/subversion/servers I created a group and associated the certificate to trust:

        [groups]
        my_repo = my.repo.url.to.trust

        [global]
        ssl-trust-default-ca = true
        store-plaintext-passwords = no

        [my_repo]
        ssl-authority-files = /etc/apache2/ssl/my.repo.url.to.trust.crt
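
    An alternative, for the record (a sketch: www-data is an assumption for the web server user, the URL is a placeholder, and the user must be able to write its auth cache under the config dir): accept the certificate once, interactively, with the same user and config dir WebSVN uses, so svn caches it for later non-interactive runs:

        sudo -u www-data svn --config-dir /etc/subversion list https://scm.example.org/svn/repo
        # at the prompt, answer 'p' to accept the certificate permanently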

    Read the article

  • How do you deal with poor management [closed]

    - by Sybiam
    I come from a company where, during one project, we saw the client three times over the whole project, and we were never informed when the client came into the office to discuss his requirements. I set up Redmine and told them that if they had any requests they could post an issue there, but they never really used Redmine to publish anything. They would instead:

    - harass a team member on the phone at any time of the day or night
    - hand us sheets of paper with new requests or changes
    - hand us new (graphical) designs

    They asked how much time it would take us to finish the project. I gave them a date, plus a week to test everything and deploy, calculating that time from the features we had to do at that point. Then they blamed us, saying our deadline was wrong and that we had lied. But the truth is that one week before the deadline they added a couple of monster features from nowhere, and during the week in which we were supposed to test and deploy, my friends spent all day in the office changing all the little things.

    After that project my friend developed some kind of depression and got scared every time his phone rang; they had essentially used him as a communication proxy. Pretty much everybody got angry over that project. As far as I know, the designer who was working with us left after it, and she had issues with the managers too. My team also started looking for work somewhere else.

    At first I tried to get things straight with management and called a meeting to discuss the communication issues. What really made me leave that job for good was the following exchange:

        Me: "We have to discuss what went wrong on the last project. It's quite important."
        Him: "Let's talk about it in a week or two. Just make a list of all the things you did wrong."
        Me: "We already have a new project, and we want to prevent what happened on the last project from happening again."
        Him: "Just do it, and we'll have our meeting in a week. Make a list of all the things you did wrong."

    It pretty much ended there. He then organized a meeting at a time when I wasn't able to come. My friend talked with him and tried to explain that we really had to discuss how projects are organized and managed, and his answer was pretty much: "During the meeting I don't want to hear how you want us to manage a project; I want to know what you guys did wrong." After that, I felt it wasn't worth discussing anything, since they weren't ready to listen to us. I found a new job and I'm pretty happy with my choice.

    I'd like to know how you would handle such a situation. Is there anything that can be done to solve such a communication problem? After that project my friend had his depression and, as far as I know, other employees had their downs too. I wonder what else can be done other than leaving such a place as soon as possible. I feel sad for the people who are still there, getting screamed at just because they need money to eat; finding another job in that situation isn't easy.

    Note: I died a little when our boss asked us (the programmers) to make a list of things we did wrong. That is probably the stupidest request I've ever received. Even if everybody thinks they did everything right, it doesn't mean there are no problems. Individual problems are rarely the big issue; colleagues help each other and solve those issues to prevent bigger problems.

    Read the article

  • 1080p Screen resolution problem after 10.04 to 12.04 update

    - by Ale
    I have a Samsung 40" LCD with an NVIDIA GeForce 6150SE nForce 430 card. I recently upgraded from 10.04 to 12.04, and the best resolution I can get is 1360x768. I've tried the proprietary drivers available in the repository:

        kmod:nvidia_current
        kmod:nvidia_173_updates
        kmod:nvidia_current_updates
        kmod:nvidia_96
        kmod:nvidia_96_updates
        kmod:nvidia_173

    I've also downloaded the latest driver from NVIDIA's web site, version 295.40, but still no luck. With the Nouveau driver I can only get 1024x768. I know there is no problem with my hardware (video card, cable and monitor): I was using it perfectly on 10.04. Can anybody suggest something else I could try to get my 1920x1080 resolution back? Thanks in advance.

    Here is some more information, gathered from reading other similar posts on Ask Ubuntu:

        $ lspci | grep VGA
        00:0d.0 VGA compatible controller: NVIDIA Corporation C61 [GeForce 6150SE nForce 430] (rev a2)

        $ xrandr
        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 320 x 240, current 1360 x 768, maximum 1360 x 768
        default connected 1360x768 0 0 0mm x 0mm
           1360x768       50.0     52.0*
           1024x768       51.0
           800x600        53.0     54.0     55.0
           680x384        56.0     57.0
           640x480        58.0
           576x432        59.0
           512x384        60.0
           400x300        61.0     62.0     63.0
           320x240        64.0
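
    The output name "default" in that xrandr listing suggests X fell back to a generic driver rather than the NVIDIA one, so a hedged first step is to check which driver the server actually loaded and why any others failed:

        # which video drivers did X try, and which errored out?
        grep -E "LoadModule|Failed|\(EE\)" /var/log/Xorg.0.log

    If the nvidia module never claims the device, the resolution list comes from the fallback driver, and no amount of mode tweaking will reach 1920x1080.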

    Read the article

  • Custom distro using Ubuntu 12.04

    - by user89707
    I am creating a custom operating system based on Ubuntu 12.04, using Remastersys. When Ubuntu logs in from LightDM, it shows "Ubuntu desktop"; I need to change that to my OS name. I also need to replace the Ambiance dark icon theme with my "fs" icon theme by default, for all logins and for the live CD. How do I permanently change the OS name, so that it does not change even when the customer updates the operating system? I would also like to replace GRUB with BURG for the default installation. I am looking to develop a new distro, like Mint, Snowlinux, etc.; a brief explanation of creating the repository and maintaining updates would be very helpful, as would links on building a full-fledged OS based on Ubuntu. If Remastersys is not good for this, please suggest some other tool. Note that I do not have high-speed internet.
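
    For the name LightDM and most tools display, a hedged starting point is the branding files below. Note that the base-files package rewrites /etc/issue and /etc/lsb-release on upgrade, so a rename that survives updates means shipping a replacement for base-files (or pinning it) in your own repository:

        cat /etc/lsb-release   # DISTRIB_ID, DISTRIB_DESCRIPTION
        cat /etc/issue         # console/login banner
        # "MyOS 1.0" is a placeholder for your distribution name
        sudo sed -i 's/Ubuntu 12.04/MyOS 1.0/g' /etc/issue /etc/issue.net

    lsb_release reads /etc/lsb-release, so most of the desktop picks the new name up from there.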

    Read the article

  • Routing between two VLANs on Single Dell 6200 Switch

    - by jenglee
    I want to be able to route between two VLANs that I have created, and I am not sure how to go about it. I have created VLAN 5 with IP address 192.168.5.1/24 and VLAN 10 with IP address 192.168.0.1/24 (the main range that I use). How can I get (for example) the host 192.168.0.144 to reach addresses in 192.168.5.0/24? Also, do you have to set a default gateway for each VLAN, or do you set the default gateway on the switch itself?
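
    For reference, a sketch in PowerConnect-style CLI (the 6200 series can route; exact syntax varies by firmware, so treat this as illustrative): enable routing globally, make each VLAN interface routed with its address, and point each host's default gateway at its own VLAN's interface (192.168.5.1 or 192.168.0.1):

        console# configure
        console(config)# ip routing
        console(config)# interface vlan 5
        console(config-if)# routing
        console(config-if)# ip address 192.168.5.1 255.255.255.0
        console(config-if)# exit
        console(config)# interface vlan 10
        console(config-if)# routing
        console(config-if)# ip address 192.168.0.1 255.255.255.0

    The switch itself only needs a default gateway (or default route) for traffic that has to leave both VLANs, e.g. towards the internet router.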

    Read the article
