Search Results

Search found 8364 results on 335 pages for 'unable'.


  • Oracle Virtualbox does not open since upgrade

    - by Langjan
    After upgrading to Ubuntu 12.10, I have been unable to restart my Oracle VirtualBox.
      jan@jan-System-Product-Name:~$ sudo /etc/init.d/vboxdrv setup
      sudo: /etc/init.d/vboxdrv: command not found
    Where do I go from here? Can anyone help, please? I tried to install VirtualBox via these commands:
      echo "deb http://download.virtualbox.org/virtualbox/debian $(lsb_release -sc) contrib" | sudo tee /etc/apt/sources.list.d/virtualbox.list
      wget -q http://download.virtualbox.org/virtu...racle_vbox.asc -O- | sudo apt-key add -
      sudo apt-get update
      sudo apt-get install virtualbox-4.2
    After attempting the install via the package manager, VirtualBox still would not start. These error reports are received:
      Because the USB 2.0 controller state is part of the saved VM state, the VM cannot be started. To fix this problem, either install the 'Oracle VM VirtualBox Extension Pack' or disable USB 2.0 support in the VM settings (VERR_NOT_FOUND).
      Result Code: NS_ERROR_FAILURE (0x80004005)
      Component: Console
      Interface: IConsole {db7ab4ca-2a3f-4183-9243-c1208da92392}
    I installed the extension pack, with no change in the result. I have added myself as a user, but the error report says the user must be added; when I try to add the user again, it says the user is already added. The following output was also received:
      Failed to open a session for the virtual machine Nuwe skelm.
      Implementation of the USB 2.0 controller not found!
    I cannot access the USB 2.0 setting to disable it. Where do I go from here, please?
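    One possible way around the inaccessible GUI setting - a sketch, not a verified fix for this exact machine - is to discard the saved state and switch off the USB 2.0 (EHCI) controller from the command line with VBoxManage, which ships with VirtualBox 4.2. The VM name below is taken from the error above, and discarding the saved state throws away the suspended session:
      VBoxManage list vms                                # confirm the exact VM name first
      VBoxManage discardstate "Nuwe skelm"               # drop the saved state that references USB 2.0
      VBoxManage modifyvm "Nuwe skelm" --usbehci off     # disable the USB 2.0 (EHCI) controller
      VBoxManage startvm "Nuwe skelm"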

    Read the article

  • Keep your Root Authorities up to date

    - by John Breakwell
    Originally posted on: http://geekswithblogs.net/Plumbersmate/archive/2013/06/20/keep-your-root-authorities-up-to-date.aspx
    By default, Windows will automatically update its internal list of trusted root authorities as long as the Update Root Certificates function is installed. This should be enabled by default and takes manual intervention to remove. With this component enabled, the following happens: if you are presented with a certificate issued by an untrusted root authority, your computer will contact the Windows Update Web site to see if Microsoft has added the CA to its list of trusted authorities. If it has been added to the Microsoft list of trusted authorities, its certificate will automatically be added to your trusted certificate store. If the component is not installed and a certificate from an untrusted CA is encountered, then a certificate warning will be seen. This is an inconvenience for the person browsing the site, as they need to click to continue. Applications, though, will be unable to proceed and will throw an exception. Example:
      ERROR_WINHTTP_SECURE_FAILURE 12175 (0x00002F8F)
      One or more errors were found in the Secure Sockets Layer (SSL) certificate sent by the server.
    If you look at the certificate's properties, you can see the "Issued by:" value. This must match a Trusted Root Certificate Authority in the current user's certificate store. So turn on automatic updating of trusted root authority certificates. For Windows Vista and above, this option is controlled through Group Policy. See the "To Turn Off the Update Root Certificates Feature by Using Group Policy" section of the following Technet article: Certificate Support and Resulting Internet Communication in Windows Vista. If Windows Update is a blocked site, then download and deploy the latest pack of root certificates from Microsoft: Update for Root Certificates For Windows XP [May 2013] (KB931125). Failing that, find a machine that has the latest root certificates installed and export them from there:
      1. Open up the Certificates console.
      2. Right-click the required Trusted Root Certificate Authority certificate.
      3. Choose Export from "All Tasks" to open up the Certificate Export Wizard.
      4. Choose an export file format - DER should be fine.
      5. Provide a file name and complete the export.
      6. Move the file to the machine that's missing the certificate.
      7. Right-click the file and choose "Install Certificate" to open up the Certificate Import Wizard.
      8. Allow the wizard to automatically select the certificate store and complete the import.
    On a side note, for troubleshooting certificate issues it can be helpful to clear the SSL state.

    Read the article

  • How to create an Ubuntu 12.10 live CD?

    - by B Biswas
    I downloaded the Ubuntu 12.10 installer from the Ubuntu website. However, I find that it is not an ISO image and I am unable to create a live CD (or DVD) from it. I could not find any help on the Ubuntu website or elsewhere on the internet. Please help. PS - My OS is Windows XP. The Ubuntu installer I downloaded from the Ubuntu website is a zip file. I unzipped the file and it has a wubi file. PS - Thanks, I could create a Live CD. 1) First I tried to do it on my laptop, which has Win 7. It showed the Ubuntu installer as a zip file and I was not able to burn it to a DVD. At that time I raised the question. 2) Later I copied the installer to my desktop, which has Win XP. There the installer shows as an ISO file; I burnt it to a DVD and created the Live CD. This works nicely on the desktop. 3) I tried to run the Live CD on my laptop, which is an AMD machine, and the system does not boot up. 4) On my office desktop, which has Win 7, the Ubuntu installer shows as an ISO file. My questions are as follows: A) Why does the Ubuntu installer file show up differently on different machines? B) Why does the Live CD not work on my laptop?

    Read the article

  • Error mounting: mount exited with exit code 13

    - by Mike Williamson
    I keep a Windows partition on my laptop for the occasional bit of Photoshop work. A while ago I noticed that Windows had disappeared from my grub boot menu, and when I try to mount the Windows partition, my system hangs for a bit and then I get this:
      Unable to mount 105 GB Filesystem
      Error mounting: mount exited with exit code 13: ntfs_attr_pread_i: ntfs_pread failed: Input/output error
      Failed to calculate free MFT records: Input/output error
      NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details.
    It seems that chkdsk is a Windows command, but since I can't boot into Windows (since it's the Windows partition that is the problem) I'm not sure what to do. Here is the output of fdisk to give you the lay of the land:
      Disk /dev/sda: 250.1 GB, 250059350016 bytes
      255 heads, 63 sectors/track, 30401 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x98000000
         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1               1       10199    81923436   83  Linux
      /dev/sda2   *       10200       22947   102398310    7  HPFS/NTFS
      /dev/sda3           22948       29164    49938052+  83  Linux
      /dev/sda4           29165       30401     9936202+   5  Extended
      /dev/sda5           29165       30401     9936171   82  Linux swap / Solaris
    Any guidance would be appreciated!
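    When chkdsk is out of reach, one hedged first step from the Linux side (ntfsfix is not a substitute for chkdsk, and the Input/output errors may indicate a failing disk, so treat this as data recovery rather than a repair):
      # assumes the NTFS partition is /dev/sda2, as in the fdisk listing above
      sudo smartctl -a /dev/sda                      # from smartmontools: check for a hardware fault first
      sudo mount -t ntfs-3g -o ro /dev/sda2 /mnt     # try a read-only mount and copy important data off
      sudo ntfsfix /dev/sda2                         # from ntfsprogs/ntfs-3g: clears the dirty flag, minor repairs only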

    Read the article

  • Downloading specific video renditions in WebCenter Content

    - by Kyle Hatlestad
    I recently had a question come up on one of my previous blog articles about downloading a specific video rendition.  When accessing image renditions, you simply need to pass in the 'Rendition=<rendition name>' parameter on the GET_FILE service and it will be returned.  But when you try that with videos, you get the error message, "Unable to download '<Content ID>'. The rendition or attachment '<Rendition Name>' could not be found in the list manifest of the revision with internal revision ID '<dID>'. Through the interface, it exposes the ability to download, but utilizes the Content Basket to bundle one or more videos and download them as a zip.   I had never tried this with videos, but thought they had worked the same way.  Well, it turns out you need to pass in an extra parameter in the case of videos.  So if you pass in parameter of 'AuxRenditionType=media', that will allow the GET_FILE service to download the video (e.g. http://server/cs/idcplg?IdcService=GET_FILE&dID=11012&dDocName=WCCBASE9010812&allowInterrupt=1 &Rendition=QuickTime&AuxRenditionType=media).  And if you haven't seen the David After Dentist video, I'd highly recommend it! 
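    For scripted downloads, the same call can be made with curl; this is only a sketch - the host, dID, dDocName and rendition name are taken from the example URL above, and the credentials are placeholders for whatever authentication your server expects:
      curl -u username:password -o video.mov \
        "http://server/cs/idcplg?IdcService=GET_FILE&dID=11012&dDocName=WCCBASE9010812&allowInterrupt=1&Rendition=QuickTime&AuxRenditionType=media"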

    Read the article

  • Pen drive cannot be mounted

    - by DUKE
    I get the following message when I insert my pen drive into a USB port. I am also unable to format it. (This pen drive was earlier used as a bootable drive for Ubuntu 12.04.) I am using Ubuntu 12.04 LTS. The lsusb command in the terminal gives the following output:
      siva@siva:~$ lsusb
      Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 001 Device 002: ID 0424:2513 Standard Microsystems Corp.
      Bus 002 Device 002: ID 0424:2513 Standard Microsystems Corp.
      Bus 001 Device 003: ID 0a5c:4500 Broadcom Corp. BCM2046B1 USB 2.0 Hub (part of BCM2046 Bluetooth)
      Bus 001 Device 004: ID 1c4f:0002 SiGma Micro Keyboard TRACER Gamma Ivory
      Bus 002 Device 003: ID 05ac:8242 Apple, Inc. IR Receiver [built-in]
      Bus 002 Device 004: ID 093a:2510 Pixart Imaging, Inc. Optical Mouse
      Bus 001 Device 007: ID 05ac:8281 Apple, Inc.
      Bus 001 Device 008: ID 0951:1653 Kingston Technology
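    Sticks that have been written as bootable Ubuntu images carry a hybrid-ISO partition table that often confuses the desktop mounter and formatter. A hedged sketch of wiping it and creating a plain FAT32 layout from the terminal - the Kingston entry (0951:1653) is presumably the stick, /dev/sdX is a placeholder, and dd will destroy whichever disk it is pointed at, so confirm the device name first:
      dmesg | tail                                       # shows which /dev/sdX the stick was assigned
      sudo umount /dev/sdX* 2>/dev/null                  # make sure nothing is mounted
      sudo dd if=/dev/zero of=/dev/sdX bs=1M count=1     # wipe the old bootable-ISO partition table
      sudo parted -s /dev/sdX mklabel msdos mkpart primary fat32 1MiB 100%
      sudo mkfs.vfat -n PENDRIVE /dev/sdX1               # create a fresh FAT32 filesystem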

    Read the article

  • Can't install Apache 2.2.22 on Ubuntu 13.10

    - by B18C1
    My work environment requires Apache 2.2.22 instead of the latest version, 2.4. My machine is currently running Ubuntu 13.10. When I use Synaptic or apt-get it will not allow me to choose an older version of Apache than 2.4. So my question is: how can I force an install of Apache 2.2.22 on Ubuntu 13.10 using Synaptic or apt-get? When I do try to specify the version I get the following:
      sudo apt-get install apache2=2.2.22-1ubuntu1
      [sudo] password for b18c1:
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
      The following information may help to resolve the situation:
      The following packages have unmet dependencies:
       apache2 : Depends: apache2-mpm-worker (= 2.2.22-1ubuntu1) but it is not going to be installed or
                          apache2-mpm-prefork (= 2.2.22-1ubuntu1) but it is not going to be installed or
                          apache2-mpm-event (= 2.2.22-1ubuntu1) but it is not going to be installed or
                          apache2-mpm-itk (= 2.2.22-1ubuntu1) but it is not going to be installed
                 Depends: apache2.2-common (= 2.2.22-1ubuntu1) but it is not going to be installed
      E: Unable to correct problems, you have held broken packages.
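    For what it's worth, 2.2.22-1ubuntu1 is the Ubuntu 12.04 (precise) build, so apt can only select it if the precise archive is visible and the matching 2.2 packages are installed together. The sketch below shows the idea, but mixing releases like this is fragile; a 12.04 VM or chroot, or building 2.2.22 from source, is usually the safer route:
      # make the precise archive visible to apt (hedged: repository line and package set are assumptions)
      echo "deb http://archive.ubuntu.com/ubuntu precise main universe" | sudo tee /etc/apt/sources.list.d/precise.list
      sudo apt-get update
      sudo apt-get install apache2=2.2.22-1ubuntu1 apache2-mpm-prefork=2.2.22-1ubuntu1 \
          apache2.2-common=2.2.22-1ubuntu1 apache2.2-bin=2.2.22-1ubuntu1 apache2-utils=2.2.22-1ubuntu1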

    Read the article

  • Drawing particles with CPU instead of GPU (XNA)

    - by Helix
    I'm trying out modifications to the following particle system. http://create.msdn.com/en-US/education/catalog/sample/particle_3d I have a function such that when I press Space, all the particles have their positions and velocities set to 0. for (int i = 0; i < particles.GetLength(0); i++) { particles[i].Position = Vector3.Zero; particles[i].Velocity = Vector3.Zero; } However, when I press space, the particles are still moving. If I go to FireParticleSystem.cs I can turn settings.Gravity to 0 and the particles stop moving, but the particles are still not being shifted to (0,0,0). As I understand it, the problem lies in the fact that the GPU is processing all the particle positions, and it's calculating where the particles should be based on their initial position, their initial velocity and multiplying by their age. Therefore, all I've been able to do is change the initial position and velocity of particles, but I'm unable to do it on the fly since the GPU is handling everything. I want the CPU to calculate the positions of the particles individually. This is because I will be later implementing some sort of wind to push the particles around. How do I stop the GPU from taking over? I think it's something to do with VertexBuffers and the draw function, but I don't know how to modify it to make it work.

    Read the article

  • Using JuJu with private Openstack cloud deployment?

    - by user76054
    I'm seeing a number of problems trying to use JuJu with our internally deployed Openstack cloud. Most of this appears to be centered around DNS host resolution as well as the need to deal with our company's internal HTTP proxies. Our Openstack deployment relies upon an unroutable 172.16.0.0/12 block of addresses for VLAN allocation to each project (tenant) hosted on our internal cloud. Users have the option of assigning one or more floating addresses to instances, allocated from a block of routable addresses on our company's internal LAN. Currently, Openstack doesn't register instance names with anything other than the DNSMASQ service running on the cloud controller. As such, there's no way to resolve this address through our internal DNS hierarchy (this issue has already been reported as Bug #945505). So even though I can bootstrap my JuJu server node, I can't connect to it with the JuJu client, since it can't resolve the local (private) network name. I am able to ssh to the node once I've assigned it an internally routable (i.e. floating) address. Which leads to the next issue. Next, to install software on an instance running in our cloud, it must have our internal proxy address defined - either in the apt.conf file or via environment variables. Unfortunately, when bootstrapping the server node, there's no provision to pass this info into an instance via the JuJu environment.yaml file (if this is even the best way to handle this issue). As a result, the bootstrap node is unable to install the required packages. I'm assuming (dangerous, I know) that the way that I've deployed Openstack in our internal environment is probably not unique. Has anyone else encountered these issues? And more importantly, are workarounds available? Regards, Ross
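    As a manual stopgap for the proxy half of the problem, the settings the question mentions (apt.conf or environment variables) can be applied by hand on the bootstrap node once it is reachable over its floating IP. The proxy URL below is a placeholder:
      ssh ubuntu@<floating-ip>
      echo 'Acquire::http::Proxy "http://proxy.example.internal:3128";' | sudo tee /etc/apt/apt.conf.d/95proxy
      export http_proxy=http://proxy.example.internal:3128 https_proxy=http://proxy.example.internal:3128
      sudo -E apt-get update          # -E keeps the proxy environment variables when running as root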

    Read the article

  • Phishing attack stuck with jsp loginAction.do page?

    - by user970533
    I'm testing a phishing website on a staged replica of a JSP web application. I'm doing the usual attack, which involves changing the post and action fields of the source code to divert to my own JSP script that captures the logins and redirects the victim to the original website. It looks easy, but trust me, it has been more than 2 weeks and I cannot write the logins to the text file. I have tested the JSP page on my local WAMP server and it works fine. In staging, when I click the OK button for the user/password fields I'm taken to the loginAction.do script. I checked this using the Tamper Data add-on for Firefox. The only way I was able to make my script run was to use the Burp proxy to intercept the request and change the action parameter to refer to my uploaded script. I want to know: what does loginAction.do do? I have googled it - it's quite common to see it in JSP applications. I have checked the code; there is nothing that tells me why the page always points to the .do script instead of mine. Is there some kind of redirection in Tomcat? I'd like to know why I'm unable to exploit this attack vector. I need the community's help.

    Read the article

  • Plymouth package broken... Can I safely remove it? Other solution to repare it?

    - by Julien Gorenflot
    I have a broken package... So far nothing horrible. The problem is that it is Plymouth, and it seems that if I remove it, I will remove half of the packages on my system... So here is my question: if I actually remove, or even purge, plymouth, will I at least have a terminal left afterwards to reinstall it? Or am I definitely doomed? Just to illustrate, here is the result of an apt-get --reinstall install plymouth:
      julien@julien-desktop:~$ sudo apt-get --reinstall install plymouth
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Reinstallation of plymouth is not possible, it cannot be downloaded.
      You might want to run 'apt-get -f install' to correct these:
      The following packages have unmet dependencies:
       plymouth : Depends: libdrm-nouveau1 (>= 2.4.11-1ubuntu1~) but it is not installable
                  Recommends: plymouth-themes-all but it is not installable
      E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
    or an apt-get -f install (well, basically it is the same):
      julien@julien-desktop:~$ sudo apt-get -f install
      [sudo] password for julien:
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Correcting dependencies... failed.
      The following packages have unmet dependencies:
       plymouth : Depends: libdrm-nouveau1 (>= 2.4.11-1ubuntu1~) but it is not installable
                  Recommends: plymouth-themes-all but it is not installable
      E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
      E: Unable to correct dependencies
      julien@julien-desktop:~$
    Any idea would be very welcome...
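    Before removing anything, it may be worth checking why apt believes libdrm-nouveau1 "is not installable" - that message usually means no enabled repository offers it at all. A diagnostic sketch:
      lsb_release -a                              # confirm which Ubuntu release this system is on
      apt-cache policy plymouth libdrm-nouveau1   # does any enabled repo provide a candidate version?
      grep -rv '^#' /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null   # are main/universe for this release enabled?
      sudo apt-get update && sudo apt-get -f install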

    Read the article

  • Missing /dev/xconsole causes rsyslog to stop as well as all other services

    - by George Van Tuyl
    We are running Ubuntu 10.04.4 LTS in Hyper-V environments. We found that services such as ssh and http stopped because the rsyslog daemon had died with the message that it was unable to find the /dev/xconsole file. I fixed it temporarily with the following:
      FILE=/dev/xconsole
      if [ -e $FILE ]; then
          echo "$FILE exists Carry on!"
      else
          mknod -m 640 /dev/xconsole c 1 3
          chown syslog:adm /dev/xconsole
          echo "Created $FILE."
      fi
    The problem is that I cannot get the rsyslog daemon to process these 8 lines when I restart the daemon. Also, restarting the daemon removes the /dev/xconsole file and we are back to all services stopped. In addressing this problem I have inserted the if--fi lines after the start and restart conditions in the rsyslog init script. The problem is I do not get an echo to stdio. Does someone have an idea on how to make rsyslog report to stdio when it creates the /dev/xconsole device? Thanks, George Van Tuyl
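    Two hedged alternatives that avoid editing the init script: stop rsyslog from needing /dev/xconsole at all (the xconsole rule only matters if someone actually runs the xconsole utility), or report through logger rather than echo, since an init script's stdout is normally not attached to any terminal:
      # Option 1: comment out the rule block that ends in |/dev/xconsole, then restart rsyslog
      sudo nano /etc/rsyslog.d/50-default.conf
      sudo service rsyslog restart
      # Option 2: keep the mknod snippet, but make its activity visible via syslog instead of echo
      logger -t xconsole-fix "Created /dev/xconsole"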

    Read the article

  • fern-wifi-cracker "Exec format error" breaks packaging system

    - by cunix
    root@cunix:/home/cunix# sudo apt-get remove fern-wifi-cracker Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: libqt4-test libqt4-sql-mysql mysql-common libqt4-xmlpatterns libqt4-help python-qt4 python-sip libqt4-sql-sqlite libqt4-sql macchanger libqt4-designer libmysqlclient16 python-scapy libqt4-scripttools Use 'apt-get autoremove' to remove them. The following packages will be REMOVED: fern-wifi-cracker 0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded. After this operation, 3,514kB disk space will be freed. Do you want to continue [Y/n]? y (Reading database ... 167661 files and directories currently installed.) Removing fern-wifi-cracker ... dpkg (subprocess): unable to execute installed pre-removal script (/var/lib/dpkg/info/fern-wifi-cracker.prerm): Exec format error dpkg: error processing fern-wifi-cracker (--remove): subprocess installed pre-removal script returned error exit status 2 Errors were encountered while processing: fern-wifi-cracker E: Sub-process /usr/bin/dpkg returned an error code (1) how to uninstall?
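    A hedged sketch of the usual way out: "Exec format error" on a maintainer script generally means its interpreter (#!) line is broken or missing, so replacing the prerm with a no-op lets dpkg finish the removal:
      sudo sh -c 'printf "#!/bin/sh\nexit 0\n" > /var/lib/dpkg/info/fern-wifi-cracker.prerm'
      sudo chmod 755 /var/lib/dpkg/info/fern-wifi-cracker.prerm
      sudo dpkg --remove fern-wifi-cracker
      sudo apt-get autoremove        # optionally clear the no-longer-required dependencies listed above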

    Read the article

  • Fix Package is in a very bad inconsistent state

    - by Benjamin Piller
    I can't update my system because it freezes while installing a third-party update (zramswap-enabler). Sometimes I get the following message in Update Manager:
      Could not initialize the package information
      An unresolvable problem occurred while initializing the package information.
      Please report this bug against the 'update-manager' package and include the following error message:
      E: The package zramswap-enabler needs to be reinstalled, but I can't find an archive for it.
    I tried to remove zramswap-enabler, but that is impossible because I get the following message:
      dpkg: error processing zramswap-enabler (--remove):
      Package is in a very bad inconsistent state - you should reinstall it before attempting a removal.
      Errors were encountered while processing:
      zramswap-enabler
      E: Sub-process /usr/bin/dpkg returned an error code (1)
    I would happily reinstall that package, but the system is unable to do it. If I remove this third-party PPA, the system warns me about a very serious problem. So why can I not install/reinstall/remove/update this package, and why does the updater freeze when I try to update?
    SOLUTION: Okay! I finally found the solution for this problem!
      Step 1. Make sure that your PPA is set up correctly.
      Step 2. Remove the broken package via the following command: sudo dpkg --remove --force-remove-reinstreq zramswap-enabler
      Step 3. Install the package again (sudo apt-get install zramswap-enabler).
      Step 4. After a restart (not necessary) you are able to install the updates correctly!
    You can actually fix any "Package is in a very bad inconsistent state" issue with this solution!
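    For convenience, the poster's fix collected into a single terminal session (the commands come from the steps above):
      sudo dpkg --remove --force-remove-reinstreq zramswap-enabler   # force removal of the half-installed package
      sudo apt-get update
      sudo apt-get install zramswap-enabler                          # reinstall it cleanly from the PPA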

    Read the article

  • Why My Adsense Account is not accepted

    - by Muhammad Adeel Zahid
    I created a blog on Blogger a few days ago and made two blog entries on it. When I try creating an AdSense account with the blog's address, it does not accept the application because of the PageType. I have searched around on the net and found that it's probably due to duplicate or insufficient content on the blog, but each of my blog entries is more than 2,000 words and I have literally typed 80 percent of the content myself, with only a few code blocks copied from other sites. Below is the content of the email I received in response to my application:
      Hello,
      Thank you for your interest in Google AdSense. Unfortunately, after reviewing your application, we're unable to accept you into AdSense at this time. We did not approve your application for the reasons listed below.
      Issues: - Page Type
      Further detail: Page Type: In order to participate in Google AdSense, publishers' websites and application information must satisfy the following guidelines:
      Your website must be your own top-level domain (www.example.com and not www.example.com/mysite).
      You must provide accurate personal information with your application that matches the information on your domain registration.
      Your website must contain substantial, original content.
      Your site must comply with Google AdSense program policies: https://www.google.com/adsense/policies which include Google's webmaster quality guidelines: http://www.google.com/support/webmasters/bin/answer.py?answer=35769#quality .
      If your site satisfies the above criteria in the future, please resubmit your application and we'll review it as soon as possible.
    My personal credentials are also the same as they appear in my Google account. I can't figure out the problem. Any help or suggestion is highly appreciated. Regards

    Read the article

  • Debugging/Logging Techniques for End Users

    - by James Burgess
    I searched a bit, but didn't find anything particularly pertinent to my problem - so please do excuse me if I missed something! A few months back I inherited the source to a fairly-popular indie game project and have been working, along with another developer, on the code-base. We recently made our first release since taking over the development but we're a little stuck. A few users are experiencing slowdowns/lagging in the current version, as compared to the previous version, and we are not able to reproduce these issues in any of our various development environments (debug, release, different OSes, different machines, etc.). What I'd like to know is how can we go about implementing some form of logging/debugging mechanism into the game, that users can enable and send the reports to us for examination? We're not able to distribute debug binaries using the MSVS 2010 runtimes, due to the licensing - and wouldn't want to, for a variety of reasons. We'd really like to get to the bottom of this issue, even if just to find out it's nothing to do with our code base but everything to do with their system configuration. At the moment, we just have no leads - and the community isn't a very technically-savvy one, so we're unable to rely on 'expert' bug reports or investigations. I've seen the debug logging mechanism used in other applications and games for everything from logging simple errors to crash dumps. We're really at a loss at this stage as to how to address these issues, having been over every commit to the repository from the previous to the current version and not finding any real issues.

    Read the article

  • Fun with upgrading and BCP

    - by DavidWimbush
    I just had trouble with using BCP out via xp_cmdshell. Probably serves me right but that's a different issue. I got a strange error message 'Unable to resolve column level collations' which turned out to be a bit misleading. I wasted some time comparing the collations of the server, the database and all the columns in the query. I got so desperate that I even read the Books Online article. Still no joy, but then I tried the interweb. It turns out that calling bcp without qualifying it with a path causes Windows to search the folders listed in the Path environment variable - in that order - and execute the first version of BCP it can find. But when you do an in-place version upgrade, the new paths are added on the end of the Path variable so you don't get the latest version of BCP by default. To check which version you're getting, execute bcp -v at the command line. The version number will correspond to SQL Server version numbering (e.g. 10.50.n = 2008 R2). To examine and/or edit the Path variable, right-click on My Computer, select Properties, go to the Advanced tab and click on the Environment Variables button. If you change the variable you'll have to restart the SQL Server service before it takes effect.

    Read the article

  • exact answer for “what is j2ee?” - job interview

    - by shuuchan
    I'd like to ask if any of you know the exact meaning of JEE. That's because a colleague of mine was asked this question in a job interview, and was "unable to answer properly"... to speak with his interviewer's words. And when he told me what he said to his interviewer I was really surprised, since it was more or less what I would have answered myself - in a concise form, the first paragraph of this article. J2EE (Java 2 Platform, Enterprise Edition) is a Java platform designed for the mainframe-scale computing typical of large enterprises. Sun Microsystems (together with industry partners such as IBM) designed J2EE to simplify application development in a thin client tiered environment. J2EE simplifies application development and decreases the need for programming and programmer training by creating standardized, reusable modular components and by enabling the tier to handle many aspects of programming automatically. That seems not to be enough, since the interviewer asked for a "more precise and less general definition". Is there really a more precise definition for JEE? Or did my colleague just find the fussiest-interviewer-ever? :)

    Read the article

  • nano syntax highlighting not working for all languages

    - by Dejan
    I have a funny situation where I am unable to add custom highlighting definitions to my nano text editor. The funny thing is that the predefined work like a charm and can be edited. But I have created a new one for js with $ sudo touch js.nanorc $ sudo nano js.nanorc my current js.nanorc looks like this: syntax "JavaScript" "\.js$" color blue "\<[-+]?([1-9][0-9]*|0[0-7]*|0x[0-9a-fA-F]+)([uU][lL]?|[lL][uU]?)?\>" color blue "\<[-+]?([0-9]+\.[0-9]*|[0-9]*\.[0-9]+)([EePp][+-]?[0-9]+)?[fFlL]?" color blue "\<[-+]?([0-9]+[EePp][+-]?[0-9]+)[fFlL]?" color brightblue "[A-Za-z_][A-Za-z0-9_]*[[:space:]]*[(]" color black "[(]" color cyan "\<(break|case|catch|continue|default|delete|do|else|finally)\>" color cyan "\<(for|function|get|if|in|instanceof|new|return|set|switch)\>" color cyan "\<(switch|this|throw|try|typeof|var|void|while|with)\>" color cyan "\<(null|undefined|NaN)\>" color brightcyan "\<(true|false)\>" color green "\<(Array|Boolean|Date|Enumerator|Error|Function|Math)\>" color green "\<(Number|Object|RegExp|String)\>" color red "[-+/*=<>!~%?:&|]" color magenta "/[^*]([^/]|(\\/))*[^\\]/[gim]*" color yellow ""(\\.|[^"])*"|'(\\.|[^'])*'" color magenta "\\[0-7][0-7]?[0-7]?|\\x[0-9a-fA-F]+|\\[bfnrt'"\?\\]" color brightblack "(^|[[:space:]])//.*" color brightblack start="/\*" end="\*/" color brightwhite,cyan "TODO:?" color ,green "[[:space:]]+$" color ,red " +" If anyone can see the problem then please tel me
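    nano only applies syntax definitions that are pulled in with an include line from a nanorc it actually reads; the predefined ones work because such lines already exist for them, so a brand-new js.nanorc has to be included explicitly. A sketch - the paths are the conventional ones, not something taken from the question:
      sudo cp js.nanorc /usr/share/nano/js.nanorc                # keep it next to the predefined definitions
      echo 'include "/usr/share/nano/js.nanorc"' >> ~/.nanorc    # or add the same line to /etc/nanorc for all users
      nano example.js                                            # re-open a .js file to check the colouring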

    Read the article

  • I can't upgrade to 12.10 Quantal Quetzal

    - by JamesTheAwesomeDude
    After hearing about some of the features in 12.10, I decided to upgrade my Ubuntu from 12.04 LTS to the newer 12.10. However, when I try to upgrade it, I start the update manager, like always, and I start the distro upgrade, just like I do every time there's a new version, but at the end of the "Setting new software channels" stage, it fails, saying: Could not calculate the upgrade An unresolvable problem occurred while calculating the upgrade: E: Unable to correct problems, you have held broken packages. This can be caused by: Upgrading to a pre-release version of Ubuntu Running the current pre-release version of Ubuntu Unofficial software packages not provided by Ubuntu If none of this applies, then please report this bug using the command 'ubuntu-bug ubuntu-release-upgrader-core' in a terminal. What should I do? I understand that I have some sort of "broken" package on my system, but how do I identify and remove the problem? Note: At the start of the "Setting new software channels" stage, I was presented with this: Third party sources disabled Some third party entries in your sources.list were disabled. You can re-enable them after the upgrade with the 'software-properties' tool or your package manager.
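    The disabled third-party entries are the usual source of the "held broken packages" complaint. A hedged checklist to run before retrying the release upgrade (ppa-purge comes from the ppa-purge package; the PPA name is a placeholder):
      sudo apt-get update
      sudo apt-get -f install                  # let apt repair anything half-installed
      sudo dpkg --configure -a
      dpkg --get-selections | grep hold        # list packages you have put on hold, if any
      sudo apt-get install ppa-purge
      sudo ppa-purge ppa:<third-party/ppa>     # roll back packages from a PPA that blocks the upgrade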

    Read the article

  • Low-level GPU code and Shader Compilation

    - by ktodisco
    Bear with me, because I will raise several questions at once. I still feel, though, that overall this can be treated as one question that may be answered succinctly. I recently dove into solidifying my understanding of the assembly language, low-level memory operations, CPU structure, and program optimizations. This also sparked my interest in how higher-level shading languages, GLSL and HLSL in particular, are compiled and optimized, as well as what formats they are reduced to before machine code is generated (assuming they are not converted directly into machine code). After a bit of research into this, the best resource I've found is this presentation from ATI about the compilation of and optimizations for HLSL. I also found sample ARB assembly code. This sort of addressed my original curiosity, but it raised several other questions. The assembler code in the ATI presentation seems like it contains instructions specifically targeted for the GPU, but is this merely a hypothetical example created for the purpose of conceptual understanding, or is this code really generated during shader compilation? If so, is it possible to inspect it, or even write it in place of the higher-level syntax? My initial searches for an answer to the last question tell me that this may be disallowed, but I have not dug too deep yet. Also, along the same lines, are GLSL shader programs compiled into ARB assembly code before machine code is generated, and is it possible to write direct ARB assembly? Lastly, and perhaps what I am most interested in finding out: are there comprehensive resources on shader compilation and low-level GPU code? I have been unable to find any thus far. I ask simply because I am curious :)

    Read the article

  • Properly force SSL with .htaccess, no double authentication

    - by cwd
    I'm trying to force SSL with .htaccess on a shared host. This means I only have access to .htaccess and not the vhosts config. I know you can put a rule in the VirtualHost config file to force SSL which will be picked up there (and acted upon first), preventing double authentication, but I can't get to that. Here's the progress I've made:
    Config 1 - This works pretty well, but it does force double authentication if you visit http://site.com - once for http and then once for https. Once you are logged in, it automatically redirects http://site.com/page1.html to the https counterpart just fine:
      RewriteEngine On
      RewriteCond %{HTTPS} !=on
      RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
      RewriteEngine on
      RewriteCond %{HTTP_HOST} !(^www\.site\.com*)$
      RewriteRule (.*) https://www.site.com$1 [R=301,L]
      AuthName "Locked"
      AuthUserFile "/home/.htpasswd"
      AuthType Basic
      require valid-user
    Config 2 - If I add this to the top of the file, it works a lot better in that it will switch to SSL before prompting for the password:
      SSLOptions +StrictRequire
      SSLRequireSSL
      SSLRequire %{HTTP_HOST} eq "site.com"
      ErrorDocument 403 https://site.com
    It's clever how it uses the SSLRequireSSL option and the ErrorDocument 403 to redirect to the secure version of the site. My only complaint is that if you try to access http://site.com/page1.html it will redirect to https://site.com/. So it is forcing SSL without a double login, but it is not properly forwarding non-SSL resources to their SSL counterparts. Regarding the first config, Insyte mentioned: "using mod_rewrite to perform a simple redirect is a bit of overkill. Use the Redirect directive instead. It's possible this may even fix your problem, as I believe mod_rewrite rules are some of the last directives to be processed, just before the file is actually grabbed from the filesystem." I have had no such luck finding a force-ssl config option with the Redirect directive and so have been unable to test this theory.

    Read the article

  • dpkg behaving strangely?

    - by Tom Henderson
    When I use apt to get a package, I have been receiving the same error message. Here is an example trying to install wicd (which is already installed): Reading package lists... Building dependency tree... Reading state information... wicd is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded. 3 not fully installed or removed. After this operation, 0B of additional disk space will be used. Setting up tex-common (2.06) ... debconf: unable to initialize frontend: Dialog debconf: (Dialog frontend requires a screen at least 13 lines tall and 31 columns wide.) debconf: falling back to frontend: Readline Running mktexlsr. This may take some time... done. No packages found matching texlive-base. dpkg: error processing tex-common (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of texlive-binaries: texlive-binaries depends on tex-common (>= 2.00); however: Package tex-common is not configured yet. dpkg: error processing texlive-binaries (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of dvipng: dvipng depends on texlive-base-bin; however: Package texlive-base-bin is not installed. Package texlive-binaries which provides texlive-base-bin is not configured yet. dpkg: error processing dvipng (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: tex-common texlive-binaries dvipng E: Sub-process /usr/bin/dpkg returned an error code (1) I'm not sure if this is a problem with apt or with dpkg, but it certainly doesn't look good!
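    The chain of failures starts with the tex-common postinst reporting "No packages found matching texlive-base.", so a hedged recovery sketch is to pull that package in and then let dpkg finish configuring what is pending:
      sudo apt-get update
      sudo apt-get install texlive-base
      sudo dpkg --configure -a                 # re-run the pending postinst scripts (tex-common, texlive-binaries, dvipng)
      sudo apt-get -f install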

    Read the article

  • MEncoder Install on Ubuntu

    - by Tauqeer Ahmad
    I am writing this after checking almost all the posts but none of those solved my problem. I am trying to install mencoder to process some videos but there are strange errors coming. For examples when I try sudo apt-get install mencoder the following errors comes out: Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: mencoder : Depends: mplayer Depends: libasound2 (> 1.0.24.1) Depends: libavcodec53 (>= 4:0.8~beta2-2) but it is not installable or libavcodec-extra-53 (>= 4:0.8~beta2-2) but it is not going to be installed Depends: libavformat53 (>= 4:0.8~beta2-2) but it is not installable or libavformat-extra-53 (>= 4:0.8~beta2-2) but it is not going to be installed Depends: libcdparanoia0 (>= 3.10.2+debian) but it is not installable Depends: libenca0 (>= 1.9) but it is not installable Depends: libfontconfig1 (>= 2.8.0) but it is not installable Depends: libgif4 (>= 4.1.4) but it is not installable Depends: libjpeg8 (>= 8c) but it is not installable Depends: liblzo2-2 but it is not installable Depends: libsmbclient (>= 3.0.24) but it is not installable Depends: libspeex1 (>= 1.2~beta3-1) but it is not installable Depends: libtheora0 (>= 1.0) but it is not installable E: Unable to correct problems, you have held broken packages. Can anyone help to solve this issue. I tried to find static builds of MEncoder but could not.
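    Every dependency listed as "not installable" (libasound2, libjpeg8, libtheora0 and so on) is a stock Ubuntu package, so the message usually points at the apt sources rather than at mencoder itself. A diagnostic sketch:
      grep -rv '^#' /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null   # are universe/multiverse enabled for your release?
      sudo apt-get update
      apt-cache policy mencoder libavcodec53 libjpeg8   # check which repositories (if any) offer candidate versions
      sudo apt-get install mencoder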

    Read the article

  • Missing Operating System after trying to upgrade to Ubuntu 11

    - by Mauricio
    Hi there! After trying to upgrade from Ubuntu 10.04 to 11, the upgrade process stopped while running, and then I got an "out of disk, grub rescue" message when booting. After running Boot Repair, I got these results. Now I get "Missing Operating System" when trying to boot. Below I show some results from commands I gathered from help forums, but I still have no solution. Could you please help me? Any enlightenment will be very helpful!
    Disk Utility says "Disk has a few bad sectors". When trying to run the self-test I get "FAILED (Read)".
    Here is what GParted says about the /dev/sda1 partition (ext4):
      Flags: boot
      Status: not mounted
      Warning: e2label: Attempt to read block from filesystem resulted in short read while trying to open /dev/sda1
      Couldn't find valid filesystem superblock
      Unable to read the contents of this filesystem!
    From sudo fdisk -l I got:
      Disk /dev/sda: 320.1 GB, 320072933376 bytes
      255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000e0596
         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *        2048   607428607   303713280   83  Linux
      /dev/sda2       607430654   625141759     8855553    5  Extended
      /dev/sda5       607430656   625141759     8855552   82  Linux swap / Solaris
      Disk /dev/sdb: 320.1 GB, 320072933376 bytes
      255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000c3c41
         Device Boot      Start         End      Blocks   Id  System
      /dev/sdb1   *          63   625137344   312568641    c  W95 FAT32 (LBA)
    From sudo fdisk /dev/sda1 I got:
      fdisk: unable to read /dev/sda1: Inappropriate ioctl for device
    From sudo mount /dev/sda1 /mnt I got:
      mount: wrong fs type, bad option, bad superblock on /dev/sda1, missing codepage or helper program, or other error
      In some cases useful info is found in syslog - try dmesg | tail or so
    From sudo update-grub I got:
      error: cannot read from `/dev/sda'.
      /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?).
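    A hedged sketch of the usual recovery order from a live CD/USB - the drive is already reporting bad sectors, so copy off anything important before letting tools write to it:
      sudo smartctl -a /dev/sda                 # from smartmontools: confirm how unhealthy the drive is
      sudo e2fsck -f -y -C0 /dev/sda1           # repair the ext4 filesystem; add -c to scan for bad blocks
      # if e2fsck cannot find a superblock, "sudo mke2fs -n /dev/sda1" lists backup superblock locations without writing anything
      sudo mount /dev/sda1 /mnt                 # if the filesystem comes back, reinstall grub from it
      sudo grub-install --boot-directory=/mnt/boot /dev/sda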

    Read the article
