Search Results

Search found 16218 results on 649 pages for 'compiler errors'.


  • All files on automounted NTFS partition are marked as executable

    - by MHC
    I have set up an NTFS partition to automount via fstab:

        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sda7 during installation
        UUID=e63fa8a2-432f-4749-b9db-dab328807d04 / ext4 errors=remount-ro 0 1
        # /boot was on /dev/sda4 during installation
        UUID=e9ad1bb4-7c1f-4ea9-a6a5-799dfad71c0a /boot ext4 defaults 0 2
        # /home was on /dev/sda8 during installation
        UUID=eda8c755-5448-4de8-b58c-9cb75823c22d /home ext4 defaults 0 2
        # swap was on /dev/sda9 during installation
        UUID=804ff3a7-e5dd-406a-b63c-e8f3c635fbc5 none swap sw 0 0
        #Windows-Partition
        UUID=368CEBC57807FDCD /media/Share ntfs defaults,uid=1000,gid=1000,noexec 0 0

    As you can see, I have added the noexec option to the configuration. Why? Because any file I create on or move to the partition is automatically marked as executable. The problem is that there is no way of changing that through Nautilus: I cannot uncheck the "Allow executing file as program" option. The noexec option doesn't help, unfortunately; it only prevents Nautilus from displaying the "run" or "read" dialog but doesn't change the executable flag. Is there any way I can fix this?
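
    Permissions on an NTFS mount are synthesized by the ntfs-3g driver from the mount options, so one common way to stop every file showing up as executable (a hedged suggestion, not part of the original question) is to set explicit permission masks instead of noexec:

        # hypothetical replacement for the Windows partition line in /etc/fstab:
        # fmask=0133 strips the execute bit from files, dmask=0022 keeps directories traversable
        UUID=368CEBC57807FDCD /media/Share ntfs defaults,uid=1000,gid=1000,fmask=0133,dmask=0022 0 0

    After editing fstab, unmount and remount the partition (sudo umount /media/Share && sudo mount /media/Share) so the new masks take effect.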


  • TeamViewer 8 beta won't run

    - by Conner Jones
    I installed TeamViewer 7, and then one of my friends using Windows got version 8, so I installed the version 8 beta for Linux. When I try to run it from the terminal I get these errors. I attempted to do as the comment below said, and when trying to run TeamViewer I still got an error:

        conner@DemonicGrace:~$ teamviewer
        Init...
        Checking setup...
        Launching TeamViewer...
        wine: cannot find L"C:\windows\system32\winemenubuilder.exe"
        err:wineboot:ProcessRunKeys Error running cmd L"C:\windows\system32\winemenubuilder.exe -a -r" (2)
        err:winedevice:ServiceMain driver L"MountMgr" failed to load
        err:secur32:SECUR32_initSchannelSP libgnutls not found, SSL connections will fail
        fixme:heap:HeapSetInformation (nil) 1 (nil) 0
        fixme:ole:CoInitializeSecurity ((nil),-1,(nil),(nil),0,3,(nil),0,(nil)) - stub!
        fixme:heap:HeapSetInformation (nil) 1 (nil) 0
        fixme:process:SetProcessShutdownParameters (00000100, 00000000): partial stub.
        fixme:resource:GetGuiResources (0xffffffff,0): stub
        fixme:win:EnumDisplayDevicesW ((null),0,0x32df64,0x00000000), stub!
        fixme:win:EnumDisplayDevicesW (L"\\.\DISPLAY1",0,0x32dc1c,0x00000000), stub!
        fixme:win:EnumDisplayDevicesW ((null),1,0x32df64,0x00000000), stub!

    Please help me out if anyone has ideas; I'm more than willing to listen.
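
    Of the lines above, "libgnutls not found, SSL connections will fail" from the bundled Wine is often the fatal one on 64-bit systems. A hedged guess (the exact package name is an assumption for that era of Ubuntu, not something stated in the question) is to install the 32-bit GnuTLS library that TeamViewer's 32-bit Wine is looking for:

        sudo apt-get install libgnutls26:i386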


  • exception occurred for backend host '127.0.0.1/7001/7001': 'CONNECTION_REFUSED [os error=0, line 1715 of URL.cpp]: Error connecting to host 127.0.0.1:7001'

    - by Vijaya Moderator -Oracle
    When you hit the WLS server via a proxy server, you may get an issue like the one below:

        [09/Jun/2014:06:05:50] failure ( 3284): for host 127.0.0.1 trying to GET /.../soundlink_wireless_speaker/index.jsp, wl-proxy reports: exception occurred for backend host '127.0.0.1/7001/0': 'PROTOCOL_ERROR [line 835 of URL.cpp]: Backend Server not responding'
        [09/Jun/2014:06:05:50] failure ( 3284): for host 127.0.0.1 trying to GET /.../soundlink_wireless_speaker/index.jsp, wl-proxy reports: exception occurred for backend host '127.0.0.1/7001/0': 'PROTOCOL_ERROR [line 835 of URL.cpp]: Backend Server not responding'
        [09/Jun/2014:06:05:51] failure ( 3284): for host 127.0.0.1 trying to GET /.../soundlink_wireless_speaker/index.jsp, wl-proxy reports: exception occurred for backend host '127.0.0.1/7001/7001': 'CONNECTION_REFUSED [os error=0, line 1715 of URL.cpp]: Error connecting to host 127.0.0.1:7001'

    To solve the issue:

    1. Check whether there is any issue at the firewall by executing the command below:

        telnet 127.0.0.1 7001

    2. Also try accessing using the hostname instead of the IP address.

    If it errors out like below:

        Microsoft Telnet> o 127.0.0.1 7001
        connecting to 127.0.0.1

    with a warning thrown at the WebLogic console:

        <BEA-000449> <Closing socket as no data read from it on 127.0.0.1:54,356 during the configured idle timeout of 5 secs>

    test the same again by disabling the firewall. Below are the steps:

    1) Go to Start ----> Run
    2) Type services.msc
    3) In services.msc search for "Windows Firewall", right-click on "Windows Firewall" and select Stop.
    4) Access the WebLogic URL.

    If the issue is resolved after disabling the firewall, ask your OS system admin to open port 7001 at the firewall end so the application can be accessed.
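
    Rather than stopping the whole Windows Firewall service, a narrower alternative (a sketch, not part of the original note) is to open just the WebLogic listen port from an elevated Command Prompt:

        netsh advfirewall firewall add rule name="WebLogic 7001" dir=in action=allow protocol=TCP localport=7001

    telnet 127.0.0.1 7001 should then connect, and the wl-proxy CONNECTION_REFUSED errors should stop if the firewall was the cause.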


  • Cryptswap boot error - can't mount?

    - by woody
    I believe I have my swap set up but am not sure, because on startup it says something along the lines of "could not mount /dev/mapper/cryptswap1, M for manual, S for skip". But it appears to be mounted? I have already tried this solution with no success.

    When I run free -m the output is:

                     total       used       free     shared    buffers     cached
        Mem:          3887        769       3117          0         54        348
        -/+ buffers/cache:         366       3520
        Swap:         4026          0       4026

    and sudo blkid is:

        /dev/sda1: UUID="9fb3ccd6-3732-4989-bfa4-e943a09f1153" TYPE="ext4"
        /dev/mapper/cryptswap1: UUID="bd9fe154-8621-48b3-95d2-ae5c91f373fd" TYPE="swap"

    and cat /etc/crypttab is:

        cryptswap1 /dev/sda5 /dev/urandom swap,cipher=aes-cbc-essiv:sha256

    my /etc/fstab is:

        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sda1 during installation
        UUID=9fb3ccd6-3732-4989-bfa4-e943a09f1153 / ext4 errors=remount-ro 0 1
        # swap was on /dev/sda5 during installation
        #UUID=bb0e378e-8742-435a-beda-ae7788a7c1b0 none swap sw 0 0
        /dev/mapper/cryptswap1 none swap sw 0 0

    cat /proc/swaps output is:

        Filename     Type        Size     Used   Priority
        /dev/dm-0    partition   4123644  0      -1

    Is my swap not set up correctly, or how can I fix my boot message?
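
    As a first check (a diagnostic sketch, not part of the original question), it helps to confirm that the mapping is really being created and used at boot:

        sudo cryptsetup status cryptswap1   # should report an active device backed by /dev/sda5
        swapon -s                           # same information as /proc/swaps; /dev/mapper/cryptswap1 appears as /dev/dm-0
        dmesg | grep -i cryptswap           # any errors from the boot-time activation

    If these all look healthy, the swap itself is working and the boot message is about when cryptswap1 becomes available during startup rather than whether it works at all.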


  • Connected to wireless, but no internet access

    - by boogaloo
    After installing Ubuntu 12.04 a week ago, wireless internet had been working fine. It stopped working yesterday, however, and I'm at a loss for what to do, even after scouring replies to similar posted problems. I have tried using Google's public DNS and turning off proxy settings in Firefox. I have used nm-tool and lshw to make sure my wireless device and driver are connected. If anyone can help me resolve this issue I would be extremely grateful!

    @kregerjd:

        $ ping -c3 www.google.com
        ping: unknown host www.google.com

    @Alaa:

        $ route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 wlan0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 wlan0
        192.168.1.0     0.0.0.0         255.255.255.0   U     2      0        0 wlan0

        $ ping -c4 192.168.1.1
        PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data
        From 192.168.1.104 icmp_seq=1 Destination Host Unreachable
        From 192.168.1.104 icmp_seq=2 Destination Host Unreachable

        --- 192.168.1.1 ping statistics ---
        4 packets transmitted, 0 received, +2 errors, 100% packet loss, time 2998ms
        pipe 4
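
    Since the gateway itself is unreachable while the interface is associated and has an address, a few extra data points narrow it down (a diagnostic sketch, not part of the original question):

        iwconfig wlan0                        # ESSID, access point and link quality of the association
        ip addr show wlan0                    # confirms the 192.168.1.104 address and that the link is UP
        arp -n                                # an incomplete entry for 192.168.1.1 points at a layer-2 or driver problem
        sudo service network-manager restart  # re-associate from scratch via NetworkManager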


  • Select (loop) or command not working in shell-script

    - by user208098
    I've been tinkering with Linux and Unix for years but am still a novice in my own mind, and recently I find myself trying to be more professional with it, as I work in IT. So with that notion I'm studying shell scripting. I've hit a snag in Ubuntu, using the latest version, 13.10 Saucy. When I use the select command in a sh script it doesn't work; depending on how I format the command it will return either Unexpected "do" or Unexpected "done". See the following two examples.

    This section of code produces an unexpected "do" error:

        #/bin/bash
        PS3='Please enter your choice'
        select opt in option1 option2 option3 quit
        do
            case $opt in
                "option1") echo "you chose choice 1" ;;
                "option2") echo "you chose choice 2" ;;
                "option3") echo "you chose choice 3" ;;
                "quit")    break ;;
                *)         echo invalid option ;;
            esac
        done

    This section of code produces an unexpected "done" error:

        #/bin/bash
        PS3='Please enter your choice'
        select opt in option1 option2 option3 quit ; do
            case $opt in
                "Option1") echo "you chose choice 1" ;;
                "Option2") echo "you chose choice 2" ;;
                "Option3") echo "you chose choice 3" ;;
                "quit")    break ;;
                *)         echo invalid option ;;
            esac
        done

    When I enter these commands interactively on the command line I get the desired result, which is a list of choices to choose from. However, when executed from a script I get the aforementioned errors. Also, a side note: I have tried this in Fedora as a script and it worked perfectly, so my question is why isn't it working in Ubuntu? Is this a difference between RHL and Debian? Or is it a bug in the latest version of Ubuntu? Thanks in advance for any help! KG
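
    One likely explanation (an observation added here, not part of the original question): these errors are characteristic of the script being interpreted by dash rather than bash. On Ubuntu, sh is dash, which has no select builtin, while on Fedora sh points at bash, which is why the same file worked there. The shebang above is also missing its "!", so it is only a comment and does not force bash. A minimal sketch of the corrected script, saved as menu.sh (an illustrative name):

        #!/bin/bash
        # the "!" matters: without it the first line does not select an interpreter
        PS3='Please enter your choice'
        select opt in option1 option2 option3 quit; do
            case $opt in
                option1) echo "you chose choice 1" ;;
                option2) echo "you chose choice 2" ;;
                option3) echo "you chose choice 3" ;;
                quit)    break ;;
                *)       echo "invalid option" ;;
            esac
        done

    Run it as ./menu.sh (after chmod +x menu.sh) or as bash menu.sh; running it as sh menu.sh will reproduce the "Unexpected" errors because dash then ignores the shebang.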


  • Install on Acer Aspire 4752

    - by user216962
    I am at my wits' end with this computer. I bought an Acer Aspire 4752 with a fully loaded version of Windows 7 on it. I prefer Ubuntu, so I began to install 14.04 from USB. I got the error:

        [Errno 5] Input/output error
        This is often due to a faulty CD/DVD disk or drive, or a faulty hard disk. It may help to clean the CD/DVD, to burn the CD/DVD at a lower speed, to clean the CD/DVD drive lens (cleaning kits are often available from electronics suppliers), to check whether the hard disk is old and in need of replacement, or to move the system to a cooler environment.

    So I tried a different USB stick: same error. Tried different versions of Ubuntu: same error. I've used Startup Disk Creator and UNetbootin to make bootable USB devices. I can boot with the USB drive and run Ubuntu that way. I even checked the hard drive using the tools in Ubuntu; everything was fine, except it said the hard drive was hot. I tried a different hard drive: got the same error above. I ran a test with memtest86: everything was fine. No matter what I do, using the USB gives me the Errno 5 error. I then switched to using DVDs. Now I keep getting an uncompression error when installing Ubuntu 14.04 or 12.04. I can't figure out for the life of me why I get nothing but errors. Can anyone help?
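
    Errno 5 during installation very often traces back to corrupt install media rather than to the target disk, so one cheap check (a suggestion, not part of the original question; the filename is illustrative) is to verify the downloaded ISO before writing it again:

        sha256sum ubuntu-14.04-desktop-amd64.iso
        # compare the output with the SHA256SUMS file published alongside the image on releases.ubuntu.com

    If the hash does not match, re-download the image and rewrite the USB stick or DVD before suspecting the hardware.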


  • when I type apt-get -f install, I get the error message

    - by gene
    xserver-xorg-core (2:1.11.4-0ubuntu10.8) breaks xserver-xorg-video-5 and is installed. Also, I cannot upgrade my software; it says that the package system is broken, with detailed information:

        The following packages have unmet dependencies:
        xserver-xorg-core: Depends: xserver-common (>= 2:1.11.4-0ubuntu10.8) but 2:1.11.4-0ubuntu10.8 is installed

    When I issue sudo apt-get update, the output seems fine. The source is (sorry, the output has too many links that I cannot post): http://archive.ubuntu.com

        Reading package lists... Done

    When I issue sudo apt-get dist-upgrade, the output is:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
         xserver-xorg-core : Breaks: xserver-xorg-video-5
        E: Unmet dependencies. Try using -f.

    When I issue sudo apt-get -f install, the output is:

        dpkg: dependency problems prevent configuration of xserver-xorg-video-radeon:
         xserver-xorg-core (2:1.11.4-0ubuntu10.8) breaks xserver-xorg-video-5 and is installed.
         xserver-xorg-video-radeon (1:6.12.1-0ubuntu2) provides xserver-xorg-video-5.
        dpkg: error processing xserver-xorg-video-radeon (--configure): dependency problems - leaving unconfigured
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        Errors were encountered while processing:
         xserver-xorg-video-radeon
        E: Sub-process /usr/bin/dpkg returned an error code (1)
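
    The installed xserver-xorg-video-radeon (1:6.12.1-0ubuntu2) is much older than the xserver-xorg-core now on the system, which is exactly what dpkg is complaining about. A hedged way out (an assumption about the fix, not part of the original question) is to drop the stale driver package and reinstall the current one:

        sudo dpkg --remove --force-depends xserver-xorg-video-radeon   # remove the old, incompatible package
        sudo apt-get -f install                                        # let apt repair the remaining dependencies
        sudo apt-get install xserver-xorg-video-radeon                 # reinstall the driver that matches the new X server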


  • Perl script rendered in browser as code through symlink - fine when accessed directly

    - by John Dittmar
    I have a Rails 4 app that has some views that post to Perl CGI scripts. The Perl scripts are accessed via a symbolic link to a folder called "cgi-bin". When I navigate to a Perl script through the symbolic link it is rendered as text instead of executed (i.e. localhost:3000/cgi-bin/test.cgi); however, when I access it directly it executes without issue (i.e. localhost/path/to/cgi-bin/test.cgi). I am using Apache 2 on OS X. In the directory localhost/path/to/ I have an .htaccess file that contains the following:

        # General Apache options
        AddHandler fastcgi-script .fcgi
        AddHandler cgi-script .cgi
        Options +FollowSymLinks +ExecCGI

    I have the exact same lines in the .htaccess file under localhost:3000/. I have also uncommented AllowOverride All in httpd.conf. There are no errors in Apache's error log. When I access the direct link to test.cgi, a new line is appended to Apache's access log; when I access the script through the symbolic link (and it is rendered as text), no line is appended to the access log. Any idea why this error occurs? This setup worked fine in a previous version of Rails and OS X, but recently I upgraded to Mavericks and figured I should update the Rails application to v4.0 as well.
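
    The missing access-log entry is the big clue: localhost:3000 is served by the Rails application server (WEBrick or similar, listening on port 3000), not by Apache, so the .htaccess handlers never run and the symlinked .cgi is delivered as a plain static file. A hedged sketch of the usual Apache-side arrangement instead, with the scripts served by Apache on port 80 (paths are illustrative, not taken from the original setup):

        # in httpd.conf or an included vhost file
        ScriptAlias /cgi-bin/ "/path/to/cgi-bin/"
        <Directory "/path/to/cgi-bin">
            Options +ExecCGI +FollowSymLinks
            AddHandler cgi-script .cgi
            Order allow,deny
            Allow from all
        </Directory>

    The Rails views would then post to the port-80 URL (e.g. http://localhost/cgi-bin/test.cgi) rather than to a path under the Rails server.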


  • Password protect an aliased virtual directory

    - by Jason
    I have a main domain being hosted through cPanel. I also have a sub-domain that I would like to appear as a path under the main domain instead of as a sub-domain. So I have:

        http://example.com/       pointing to the main hosted files.
        http://example.com/mydir  pointing to the subdomain files.

    This is achieved by an httpd.conf include from the main domain section to set an alias:

        alias /mydir /path/to/subdomain/files/

    Now, that works fine so far. The problem is that if a .htaccess file under /path/to/the/subdomain/files/ contains an error, the alias is completely skipped, and /mydir goes instead to the main host files. That is kind of surprising to me; I would expect an error to return an error instead. Now the killer: if I try to password protect /path/to/subdomain/files/, then trying to access http://example.com/mydir will again attempt to deliver from under the main hosted files and not from /path/to/subdomain/files/. I am not seeing any errors reported on the .htaccess file in the Apache error log, so I am assuming the .htaccess is valid:

        AuthUserFile /path/to/valid/readable/.htpasswd
        AuthName "Secure Access"
        AuthType Basic
        Require valid-user

    This kind of behaviour does not seem right to me. Is there something obvious that could be causing it? Or is this just the way it works? Perhaps using an alias is the wrong way to go?
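
    One way to take the fragile per-directory .htaccess out of the picture (a sketch, assuming you can edit the same httpd.conf include that defines the alias) is to attach the authentication directly to a Directory block for the aliased path:

        Alias /mydir /path/to/subdomain/files/
        <Directory "/path/to/subdomain/files">
            AuthUserFile /path/to/valid/readable/.htpasswd
            AuthName "Secure Access"
            AuthType Basic
            Require valid-user
        </Directory>

    With the auth living in the server configuration, a broken or ignored .htaccess can no longer silently drop the request back onto the main host's files.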


  • Apache virtual hosts - Resources on website not loaded when accessed from other hostname than localhost

    - by Christian Stadegaart
    Running virtual hosts on Mac OS X 10.6.8 with Apache 2.2.22. /etc/hosts is as follows:

        127.0.0.1       localhost 3dweergave studio-12.fritz.box
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost

    Virtual hosts configuration:

        NameVirtualHost *:80

        <VirtualHost *:80>
            DocumentRoot "/opt/local/www/3dweergave"
            ServerName 3dweergave
            ErrorLog "logs/3dweergave-error_log"
            CustomLog "logs/3dweergave-access_log" common
            <Directory "/opt/local/www/3dweergave">
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerName main
        </VirtualHost>

    This will output the following settings:

        *:80 is a NameVirtualHost
        default server 3dweergave (/opt/local/apache2/conf/extra/httpd-vhosts.conf:21)
        port 80 namevhost 3dweergave (/opt/local/apache2/conf/extra/httpd-vhosts.conf:21)
        port 80 namevhost main (/opt/local/apache2/conf/extra/httpd-vhosts.conf:34)

    I made 3dweergave the default server by putting it first in the list. This will cause all undefined virtual host names to load 3dweergave, and thus http://localhost will point to 3dweergave. Of course, normally the first in the list is the virtual host main and localhost will point to main, but for testing purposes I switched them. When I navigate to http://localhost, my CakePHP default homepage shows as expected (screenshot 1). But when I navigate to http://3dweergave, my CakePHP default homepage doesn't show as expected. It looks like every relative link to a resource is not accepted by the server (screenshot 2). For example, the CSS isn't loaded. When I open the source and click on the link, it opens the CSS file in the browser without errors. But when I run Firebug while loading the webpage, it seems that the CSS file isn't retrieved. (<link rel="stylesheet" type="text/css" href="/css/cake.generic.css" />) How can I fix this unwanted behaviour?
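
    A quick way to see what the 3dweergave vhost actually returns for the stylesheet, independent of the browser (a diagnostic sketch, not part of the original question; the log path is assumed from the vhost output above):

        curl -I -H "Host: 3dweergave" http://127.0.0.1/css/cake.generic.css
        # check the status code and Content-Type of the response
        tail /opt/local/apache2/logs/3dweergave-access_log
        # confirms whether the request was routed to this vhost at all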


  • Ubuntu 12.04 Frequent Kernel Panics

    - by Jordan Johns
    Dealing with what seems to be quantum behavior here. I'm having frequent kernel panics and the problem is very hard to locate. To cut to the chase of what is going on, I am essentially getting this screen (kernel panic screenshot not reproduced here). This occurs always after login (the login screen is fine, and I can go into my TTYs and everything). But once I log in I have anywhere from 0 (instant) to 10 seconds before everything grinds to a halt and freezes. The only way I found out that it was a kernel panic was going into a TTY while it happens. At first, I figured something must be wrong with memory. Ran memtest86, full standard test: no errors. OK. CPU problem? Windows works fine; I ran prime95 and a bunch of other torture tests for many hours. Drive problem? Nope: fsck reports no problems and the SMART status is OK. Overclocking problem? Nope. Windows is working fine, and I also reset the BIOS to prove the point that it was not the cause. At a loss here, since even reinstalling gives me the same problem, even when attempting to install Ubuntu (like right after you select "Install Ubuntu" or "Try Ubuntu before installing"). Anyone have any thoughts? It's very bizarre; it's as if my system hates Linux and is trying to tell me not to use it (that would be a terrible reality). Thanks!


  • E: Sub-process /usr/bin/dpkg returned an error code (1)

    - by kss
    When I run sudo apt-get install acroread, I get the following output:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages:
          libldap2 libgnome-speech7
        The following NEW packages will be installed:
          acroread
        0 upgraded, 1 newly installed, 0 to remove and 6 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/60.1 MB of archives.
        After this operation, 142 MB of additional disk space will be used.
        (Reading database ... 237901 files and directories currently installed.)
        Unpacking acroread (from .../acroread_9.5.1-1precise1_i386.deb) ...
        dpkg: error processing /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb (--unpack):
         failed in write on buffer copy for backend dpkg-deb during `./opt/Adobe/Reader9/Browser/intellinux/nppdf.so': No space left on device
        No apport report written because MaxReports is reached already
        dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for desktop-file-utils ...
        Processing triggers for gnome-menus ...
        Processing triggers for man-db ...
        /usr/bin/mandb: can't write to /var/cache/man/1645: No space left on device
        Errors were encountered while processing:
         /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
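
    The underlying failure is "No space left on device" (both dpkg and mandb hit it), so the dpkg exit code is only a symptom. A hedged sketch of the usual cleanup before retrying:

        df -h /                         # confirm which filesystem is full and by how much
        sudo apt-get clean              # empty /var/cache/apt/archives of downloaded .deb files
        sudo apt-get autoremove         # drop packages (old kernels etc.) that are no longer needed
        sudo apt-get install acroread   # retry once space has been freed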


  • Compiling custom kernel 3.7.x lowlatency on Ubuntu 12.04

    - by FlabbergastedPickle
    All, I have a peculiar problem with trying to compile a lowlatency flavor of the latest 3.7 kernel. I retrieved the prepatched source from the launchpad using bzr, compiled it using the usual make-kpkg using the current config file plus default options for the rest, installed the kernel and booted into it. Everything works except for the fglrx and wl drivers that I had to install in the original 12.04 lowlatency kernel. So, I tried recompiling these and succeeded with both of them (no errors were reported)--wl driver required a minor adjustment to system.h include while latest fglrx 12.11 beta11 (released yesterday, Dec. 3rd, 2012) compiled without the hitch. Yet, when I try to modprobe either module (both having in common the fact that they were built after the kernel, fglrx as a deb, and wl via the usual make/make install), I get "FATAL: no MODULENAME module found" (MODULENAME being either wl or fglrx). The graphic driver watermark shows 3D crossed out and "for testing purposes" (or "unsupported hardware," can't remember), and no fglrx or wl is loaded. More mysteriously, dmesg shows no attempt on kernel's behalf to load the said drivers, even though they are clearly in the right /lib/modules/KERNEL_VERSION folder. How is this possible? Has something fundamentally changed in 3.7 kernel that would prevent modprobing of these? I know that there is driver signing option that was merged recently but as far as I could tell the kernel config file generated by the build process had that disabled. OTOH, while building wl driver, I did get a warning that the driver was not signed... Then again, even if the kernel disallowed loading of those modules, shouldn't dmesg reflect that? Any thoughts on this one are most appreciated.
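
    Since dmesg shows no load attempt and modprobe reports "module not found", the first thing worth verifying (a diagnostic sketch, not part of the original question) is that the modules really landed in the directory for the running kernel and that the module index was rebuilt:

        uname -r                                                        # exact version string of the running 3.7 kernel
        find /lib/modules/$(uname -r) -name wl.ko -o -name fglrx.ko     # the built modules must be somewhere under this tree
        sudo depmod -a                                                  # rebuild the module index for the running kernel
        sudo modprobe wl; sudo modprobe fglrx                           # retry once the index is rebuilt

    If the .ko files are sitting under a different version directory, modprobe for the running kernel will never see them no matter how cleanly they compiled.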


  • How long does it take to delete a Google Apps for Domain account and allow recreation?

    - by Wil
    Basically, I set up a Google Apps for domain account for one of my clients but upon trying to upload a CSV of users, only half were created and then I kept getting time outs, 404's and other errors which I never saw when setting up another client. I was not sure how, but thought I may have caused an error or there was an error Google Side when the account was created so I thought it may be best to delete the domain and start from scratch. Little did I know, you had to wait to recreate the domain! As far as I can tell from what I have read, there is five days to delete and allow recreation of similar user names, however, I can't see anything that shows how long you have to wait for the domain account. It has now been 5/6 days and I can't recreate it. The cancellation email I got shows "...If at any time you would like to sign-up for Google Apps for this or another domain, you can do so by visiting..." But it does not mention how long!!! I tried emailing Google directly but got no response. Does anyone know?


  • Dealing with inflexible programmers.

    - by Singleton
    Sometimes programmers who work on a project for long time get inflexible, and it becomes difficult to reason with them. Even if we do manage to convince them, they can be unlikely to implement our suggestions. For instance, I recently joined a project where the build & release process is too complicated and has unnecessary roadblocks. I suggested that we get rid of some of the development overhead (like filling a few spreadsheets) just by integrating defect management and version control tools (both are IBM-Rational tools so integration can be a very easy one-off effort). Also, if we use tools like Maven & Ant (the project involves Java and some COTS products) build & release can be simplified which should reduce manual errors & intervention. I managed to convince others and I'm ready to put in the effort to develop a proof of concept. But the ‘Senior’ developer is not willing, possibly because the current process makes him more valuable. How do we handle this situation without developing friction in the team?


  • How would I batch rename a lot of files using command-line?

    - by Whisperity
    I have a problem which I am unable to solve: I need to rename a great dump of files using patterns. I tried using this, but I always get an error. I have a folder with a lot of files inside; running ls -1 | wc -l returns that I have about 160000 files in it. The problem is that I wish to move these files to a Windows system, but most of them have characters like : and ? in their names, which makes the files inaccessible on said Windows-based systems. (As a "do not solve but deal with" method, I tried booting up a LiveCD on the Windows system and moving the files using the live OS. Under that Ubuntu, the files were readable and writable on the mounted NTFS partition, but when I booted back into Windows, it showed that the file is there but Windows was unable to access it in any fashion: rename, delete or open.) I tried running

        rename 's/\:/_' *

    inside the folder, but I got an "Argument list too long" error. Some searching revealed that this happens because I have so many files, and then I arrived here. The problem is that I don't know how to alter the command to suit my needs, as I always end up with various errors. Trying

        find -name '*:*' | xargs rename : _

    gives

        xargs: unmatched single quote; by default quotes are special to xargs unless you use the -0 option
        syntax error at (eval 1) line 1, near ":"
        xargs: rename: exited with status 255; aborting

    Adding -0 after xargs turns the error message into

        xargs: argument line too long

    These files are archive files generated by various PHP scripts. The best solution would be having a chance to rename them before they are moved to Windows, but if there is no way to do that, we might have a way to rename the files while they are moved to Windows. I use Samba and ProFTPD to move the files. Unfortunately, graphical software is out of the question, as the server containing the files is what it is: a server, with only a command-line interface.
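
    A sketch of how this is commonly handled (not part of the original question; note also that the expression 's/\:/_' is missing its closing slash): feed the file list to rename over a NUL-separated pipe, so neither the shell's argument-length limit nor xargs quoting gets in the way, and give rename a complete substitution:

        # rename here is the Perl rename shipped with Ubuntu; -n previews without touching anything
        find . -maxdepth 1 \( -name '*:*' -o -name '*\?*' \) -print0 | xargs -0 rename -n 's/[:?]/_/g'
        # drop -n to actually perform the renames once the preview looks right
        find . -maxdepth 1 \( -name '*:*' -o -name '*\?*' \) -print0 | xargs -0 rename 's/[:?]/_/g'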


  • New site not appearing in index after change of address, no feedback from google webmaster tools

    - by Duffy
    Our change of address seems to not be taking effect. Here's the story so far: we're a web company and our product is called The New Hive. Our site used to be at thenewhive.com, but we decided to switch to newhive.com (drop the "the", it's cleaner). So, the timeline of what I've tried, starting on July 29th:

    - used 301 redirects for all pages (e.g. thenewhive.com/tag/art = newhive.com/tag/art)

    At this point we noticed that we had disappeared from search results when searching "The New Hive"; the front page used to be all links to our site plus a couple of news articles about the company. So on August 5th I:

    - verified the new domain in Webmaster Tools (the old domain was already verified)
    - submitted a change of address request via Webmaster Tools / Configuration / Change of Address

    Then after another week, on August 13th, I did this:

    - went to Webmaster Tools / Health / Fetch as Google
    - fetched our homepage and a couple of sub-pages, all successfully
    - clicked "Submit to Index" for the homepage

    As of today (August 23rd) we're still not showing up in the index. We're getting no warnings or feedback of any kind from the dashboard, so I'm inclined to think something's broken with the dashboard rather than that something's wrong with our site from an SEO perspective. From the dashboard:

        No new messages or recent critical issues.
        Crawl Errors: No data available.

    From Health - Index Status:

        Total indexed      0
        Ever crawled       42,490
        Not selected       12
        Blocked by robots  0

    I'm really at a loss here; any help would be appreciated.


  • Build vs Rebuild

    - by prash
    Build means compile and link only the source files that have changed since the last build, while Rebuild means compile and link all source files regardless of whether they changed or not. Build is the normal thing to do and is faster. Sometimes the versions of project target components can get out of sync and rebuild is necessary to make the build successful. In practice, you never need to Clean. Build or Rebuild Solution builds or rebuilds all projects in the your solution, while Build or Rebuild <project name> builds or rebuilds the StartUp project. To set the StartUp project, right click on the desired project name in the Solution Explorer tab and select Set as StartUp project. The project name now appears in bold. Compile just compiles the source file currently being edited. Useful to quickly check for errors when the rest of your source files are in an incomplete state that would prevent a successful build of the entire project. Ctrl-F7 is the shortcut key for Compile. All source files that have changed are saved when you request a build/rebuild, so you don't have to save them first. When you run your executable (F5 or Ctrl-F5), Visual Studio saves all your changed source files and builds anything that changed, so you don't need to explicitly do those steps every time. This allows for quick "trial and error" debugging. Incidentally, if you like those little Visual Studio keyboard shortcuts, you can download posters of the C# and the VB.Net ones, respectively (I am personally a big fan of using keyboard shortcuts :) ).   Visual Studio 2010 http://www.microsoft.com/downloads/en/details.aspx?FamilyID=92ced922-d505-457a-8c9c-84036160639f   Visual Studio 2005 C#: http://www.microsoft.com/downloads/details.aspx?FamilyID=c15d210d-a926-46a8-a586-31f8a2e576fe&DisplayLang=en VB.NET: http://www.microsoft.com/downloads/details.aspx?FamilyID=6bb41456-9378-4746-b502-b4c5f7182203&DisplayLang=en
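
    For reference, the same distinction exists on the MSBuild command line, which can make the difference easy to see in build logs (the solution name below is illustrative):

        rem incremental: compiles and links only what changed since the last build
        msbuild MySolution.sln /t:Build

        rem clean plus full build: recompiles every project regardless of timestamps
        msbuild MySolution.sln /t:Rebuild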


  • Implementing the NetBeans Project API on Maven in IntelliJ IDEA

    - by Geertjan
    James McGivern, one of the speakers I met at JAX London, is creating media software on the NetBeans Platform. However, he's using Maven and IntelliJ IDEA and one of the features he needs is project support, i.e., the project infrastructure that's part of NetBeans IDE. The two documents that describe the NetBeans Project API are these: http://platform.netbeans.org/tutorials/nbm-projecttype.html http://netbeans.dzone.com/how-create-maven-nb-project-type By combining the above two, you'll understand how to create a project infrastructure on top of the NetBeans Platform with Maven. However, an additional step of complexity is added when IntelliJ IDEA is included into the mix and therefore I created the following screencast which, in 15 minutes, puts all the pieces together. Be aware that I'm probably not using IntelliJ IDEA and Maven as optimally as I could and I'm publishing this at least partly so that the errors of my ways can be pointed out to me. But, first and foremost, this is especially for you James:  Note: Intentionally no sound, only callouts explaining what I'm doing. You'll probably need to pause the movie here and there to absorb the text; for details on the text, see the two links referred to above.


  • Testing of visualization projects

    - by paxRoman
    We develop small to large visualization projects for different tasks and industries and sometimes while rewriting them a couple of times in the process we hit walls because we discover that we need to add a lot of code to support new requirements. Now we have established a design process that seems to work well (at least we reduced the development time for each new project quite a bit), but we're still left scratching our heads around this question: what exactly should we test when testing visualizations? If everything that we want to explore is on the screen (bounded visualizations)? If the data is ok - if data is valid (that's one of the nice things about visualizations you can spot errors in your datasets)? Usability? User interaction? Code quality? I can tell you for sure that a simple check of the code quality is certainly not enough! Is there a classic paper / book about how to test visualizations? Also do you happen to know about classic design patterns for visualizations (except the obvious ones like Pub-Sub)?


  • Commands don't have permission when using absolute path

    - by Markos
    I have folders set up this way: /srv/samba/video

        getfacl /srv/samba/video
        # file: srv/samba/video
        # owner: root
        # group: nogroup
        user::rwx
        group::---
        group:sambaclients:rwx
        group:deluge:rwx
        mask::rwx
        other::---
        default:user::rwx
        default:group::---
        default:group:sambaclients:rwx
        default:group:deluge:rwx
        default:mask::rwx
        default:other::---

    That means user deluge has rwx on the folder /srv/samba/video. However, when running commands as user deluge, I am getting weird permission errors. When in the folder /srv/samba/video,

        sudo -u deluge mkdir foo

    works flawlessly. But when using an absolute path,

        sudo -u deluge mkdir /srv/samba/video/foo

    I am getting permission denied. When running sudo -u deluge id, I get the output

        uid=113(deluge) gid=124(deluge) skupiny=124(deluge)

    which shows that user deluge is indeed in group deluge. Also, the behaviour was the same when I gave the permissions to user deluge as well, not just group deluge. When executing as a non-system user, it does work. The reason I want to use absolute paths is that I am using an automatically triggered post-download script which extracts some files into the folder. I have spent way too many hours trying to solve this problem myself. mkdir isn't the only command that fails; touch is doing the same thing, so I suspect that it's not mkdir's fault. If you need more info, I will try to put it in here, just ask. Thanks in advance.
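
    Creating /srv/samba/video/foo by absolute path requires search (execute) permission on every directory along the path, while the relative mkdir only needs rights on the directory the process is already in. A hedged diagnostic (not part of the original question) is to check whether user deluge can traverse /srv and /srv/samba, not just the final directory:

        namei -l /srv/samba/video            # lists owner, group and mode of every path component
        sudo -u deluge ls /srv/samba/video   # fails the same way if a parent directory is not traversable
        # if /srv or /srv/samba deny execute to deluge, granting traversal alone is enough, e.g.:
        sudo setfacl -m g:deluge:x /srv /srv/samba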


  • OpenGL doesn't draw (3.3+) [on hold]

    - by Dhiego Magalhães
    Brief: I've been following this tutorial about OpenGL for 2 days, and I still can't get a triangle drawn, so I'm asking for help here. The tutorial is aimed at OpenGL 3.3 programming, using vertex arrays, buffers, etc. The libraries are GLFW3 and GLEW, and I set them up myself. The screen stays black all the time. Full code: link here (it's just like a "hello world" OpenGL program). Further details: I get no errors at all. I downloaded a tool to test my video card, and it supports OpenGL 4.1+. Standard OpenGL drawing code (from earlier versions), such as this one, works normally. I'm using Microsoft Visual Studio 10.0. I presume all the OpenGL setup was done right: I added Additional Dependencies to the linker as glew32.lib, opengl32.lib, glfw3.lib. The glew.dll was placed in SysWOW64 because I'm running 64-bit Windows and GLEW is 32-bit. Notes: I've been working hard to find out what this is, but I can't. I would appreciate it if anyone could test this code for me, so I can know whether I implemented something wrong and that it's not my code.
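
    Even when nothing is printed to the console, OpenGL 3.3 programs usually fail silently at shader compilation or program linking. A small hedged check (a generic helper, not taken from the linked code; assumes GLEW is initialized and <cstdio> is included) that can be called after glCompileShader:

        bool checkShaderCompiled(GLuint shader)
        {
            GLint ok = GL_FALSE;
            glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);              // query compile status
            if (ok != GL_TRUE) {
                GLchar log[1024];
                glGetShaderInfoLog(shader, sizeof(log), nullptr, log);  // the driver's error text
                std::fprintf(stderr, "shader compile failed:\n%s\n", log);
                return false;
            }
            return true;
        }

    The equivalent glGetProgramiv(program, GL_LINK_STATUS, ...) check after glLinkProgram, plus an occasional glGetError() call, usually pinpoints why the triangle never appears.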


  • I am looking to create realistic car movement using vectors

    - by bobthemac
    I have googled how to do this and found this: http://www.helixsoft.nl/articles/circle/sincos.htm. I have had a go at it, but most of the functions that were shown didn't work; I just got errors because they didn't exist. I have looked at the cos and sin functions but don't understand how to use them or how to get the car movement working correctly using vectors. I have no code because I am not sure what to do, sorry. Any help is appreciated.

    EDIT: I have restrictions: I must use the TL engine for my game, I am not allowed to add any sort of physics engine, and it must be programmed in C++. Here is a sample of what I got from trying to follow what was done in the link I provided:

        if (myEngine->KeyHeld(Key_W))
        {
            length += carSpeedIncrement;
        }
        if (myEngine->KeyHeld(Key_S))
        {
            length -= carSpeedIncrement;
        }
        if (myEngine->KeyHeld(Key_A))
        {
            angle -= carSpeedIncrement;
        }
        if (myEngine->KeyHeld(Key_D))
        {
            angle += carSpeedIncrement;
        }

        carVolocityX = cos(angle);
        carVolocityZ = cos(angle);

        car->MoveX(carVolocityX * frameTime);
        car->MoveZ(carVolocityZ * frameTime);
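
    A hedged sketch of the usual pattern (the KeyHeld/MoveX/MoveZ calls are taken from the snippet above; speed, accel, turnSpeed and the axis convention are assumptions): treat the heading angle and a scalar speed as the state, and derive the velocity vector from them each frame with matching sin/cos components. Using cos for both X and Z, as in the snippet, makes the car drift along a fixed diagonal instead of following its heading:

        // per-frame update; angle in radians, speed in units per second (illustrative names)
        if (myEngine->KeyHeld(Key_W)) speed += accel * frameTime;
        if (myEngine->KeyHeld(Key_S)) speed -= accel * frameTime;
        if (myEngine->KeyHeld(Key_A)) angle -= turnSpeed * frameTime;
        if (myEngine->KeyHeld(Key_D)) angle += turnSpeed * frameTime;

        float velocityX = speed * sin(angle);   // forward direction decomposed onto X...
        float velocityZ = speed * cos(angle);   // ...and Z, so the pair always has length |speed|

        car->MoveX(velocityX * frameTime);
        car->MoveZ(velocityZ * frameTime);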


  • Position Reconstruction from Depth by inverting Perspective Projection

    - by user1294203
    I had some trouble reconstructing position from depth sampled from the depth buffer. I use the equivalent of gluPerspective in GLM. The code in GLM is:

        template <typename valType>
        GLM_FUNC_QUALIFIER detail::tmat4x4<valType> perspective
        (
            valType const & fovy,
            valType const & aspect,
            valType const & zNear,
            valType const & zFar
        )
        {
            valType range  = tan(radians(fovy / valType(2))) * zNear;
            valType left   = -range * aspect;
            valType right  = range * aspect;
            valType bottom = -range;
            valType top    = range;

            detail::tmat4x4<valType> Result(valType(0));
            Result[0][0] = (valType(2) * zNear) / (right - left);
            Result[1][1] = (valType(2) * zNear) / (top - bottom);
            Result[2][2] = - (zFar + zNear) / (zFar - zNear);
            Result[2][3] = - valType(1);
            Result[3][2] = - (valType(2) * zFar * zNear) / (zFar - zNear);
            return Result;
        }

    There don't seem to be any errors in the code. So I tried to invert the projection. The formulas for the z and w coordinates after projection, read off the matrix above with w_eye = 1, are:

        z' = -(zFar + zNear) / (zFar - zNear) * z - 2 * zFar * zNear / (zFar - zNear)
        w' = -z

    and dividing z' by w' gives the post-projective depth (which lies in the depth buffer), so I need to solve for z, which finally gives:

        z = 2 * zFar * zNear / ((zFar - zNear) * depth - (zFar + zNear))

    Now, the problem is I don't get the correct position (I have compared the reconstructed one with a rendered position). I then tried using the respective formula I get by doing the same for another projection matrix (the images of that matrix and of its corresponding formula are not reproduced here). For some reason, using that formula gives me the correct position. I really don't understand why this is the case. Have I done something wrong? Could someone enlighten me please?

