Search Results

Search found 42453 results on 1699 pages for 'page description'.


  • Hosting several HTTP servers on single domain name

    - by Nakilon
    Several people share a single domain name, server.company.com, where they are supposed to host their infrastructure or temporary projects, written in different ways and even in different programming languages. How should they divide the domain?
    1. Split into subdomains: john.server.company.com, kate.server.company.com, etc. This would require a lot of admin assistance and time; there would be no way for John and Kate to set it up themselves.
    2. Split into URL namespaces: server.company.com/john/, server.company.com/kate/, etc. Pro: they can now make a single welcome page at the root with any additional info (if they need it). Con: each server would need to know its namespace string constant, and hrefs like / would need patching.
    3. Split into ports: server.company.com:8080, server.company.com:8081, etc., with a single welcome page on :80. Pro: they can still make a single welcome page at :80. Con: ???
    I would like to know more pros and cons for solutions 2 and 3.
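
    For solution 2, a reverse proxy can hide the namespace constant from each app by stripping the prefix before forwarding. Below is a minimal sketch, assuming nginx in front and hypothetical backends on local ports 8081 and 8082 (not from the original question):

        server {
            listen 80;
            server_name server.company.com;

            location /john/ {
                # the trailing slash makes nginx strip /john/ before proxying
                proxy_pass http://127.0.0.1:8081/;
            }
            location /kate/ {
                proxy_pass http://127.0.0.1:8082/;
            }
        }

    Note this only fixes inbound paths: absolute hrefs like / emitted by the apps still need patching, which is exactly the con listed above.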

    Read the article

  • CodePlex learns to talk to other services!

    CodePlex is now able to talk to other services! For example, if you want CodePlex to tell Trello to update cards on your Trello board, it can do it. Or if you want CodePlex to notify your Campfire chat room when updates are pushed, it can do that too. To start off, we are going to be adding support for the following services:
    - Campfire – Notify a Campfire chat room when commits occur
    - HipChat – Notify a HipChat chat room when commits occur
    - Trello – Add commit summaries to Trello cards by referencing those cards in commit messages
    - Twitter – Notify your Twitter followers when updates are pushed to your project
    In addition, we will continue to support our existing integrations:
    - Windows Azure – Continuously deploy to Windows Azure on pushes (for Git and Hg projects)
    - AppHarbor – Continuously deploy to AppHarbor on pushes
    To set up these integrations for your project, navigate to the project settings page as a project coordinator and click on the services section. While we are starting with these six services, the infrastructure is now in place to allow us to quickly roll out new integrations. We would love to hear which services and integrations you would like to see most on our suggestions page. We realize that there are some services and URLs that only make sense for your project to send notifications to. To support this scenario, we plan to add generic web hooks in the near future. Have ideas on how to improve CodePlex? Please visit our suggestions page! Vote for existing ideas or submit a new one. As always, you can reach out to the CodePlex team on Twitter @codeplex or reach me directly @Rick_Marron.

    Read the article

  • Wireless is disabled by hardware on a Lenovo 3000 G430

    - by sudheer
    Sir, I have a problem with my wifi switch; please tell me a solution for my problem (wifi is disabled by hardware). The output of sudo lshw -C network is:

        [sudo] password for sudheer:
        *-network DISABLED
             description: Wireless interface
             product: BCM4312 802.11b/g LP-PHY
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:06:00.0
             logical name: eth2
             version: 01
             serial: 00:21:00:72:3a:93
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=wl0 driverversion=5.100.82.38 latency=0 multicast=yes wireless=IEEE 802.11bg
             resources: irq:19 memory:f4700000-f4703fff
        *-network
             description: Ethernet interface
             product: NetLink BCM5906M Fast Ethernet PCI Express
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:07:00.0
             logical name: eth0
             version: 02
             serial: 00:1e:68:ad:24:0b
             size: 100Mbit/s
             capacity: 100Mbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=sb v3.04 ip=172.16.52.79 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s
             resources: irq:47 memory:f4600000-f460ffff

    The output of iwconfig is:

        lo    no wireless extensions.
        eth2  IEEE 802.11  Access Point: Not-Associated
              Link Quality:5  Signal level:0  Noise level:0
              Rx invalid nwid:0  invalid crypt:0  invalid misc:0
        eth0  no wireless extensions.

    And scanning:

        sudheer@sudheer:~$ sudo iwlistscanning
        sudo: iwlistscanning: command not found
        sudheer@sudheer:~$ sudo iwlist scanning
        lo    Interface doesn't support scanning.
        eth2  Failed to read scan data : Invalid argument
        eth0  Interface doesn't support scanning.
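
    Since lshw reports the interface as DISABLED, a common first check (an addition, not from the original post) is rfkill, which distinguishes a soft block from the hardware switch:

        sudo rfkill list          # shows which radios are soft- or hard-blocked
        sudo rfkill unblock wifi  # clears a soft block; a hard block needs the physical switch or Fn key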

    Read the article

  • Recovering from an incorrectly deployed robots.txt?

    - by Doug T.
    We accidentally deployed a robots.txt from our development site that disallowed all crawling. This has caused traffic to dip dramatically, and Google results to report: "A description for this result is not available because of this site's robots.txt – learn more." We corrected the robots.txt about 1.5 weeks ago, and you can see our robots.txt here. However, search results still report the same robots.txt message. The same appears to be true for Bing. We've taken the following actions:
    1. Submitted the site to be recrawled through Google Webmaster Tools.
    2. Submitted a sitemap to Google (basically doing everything possible to say "Hey, we're here! And we're crawlable!").
    Indeed a lot of crawl activity seems to be happening lately, but still no description is crawled. I noticed this question where the problem was specific to a 303 redirect back to a disallowed path. We are 301 redirecting to /blog, but crawling is allowed there. This redirect is due to a site redesign: WordPress paths for posts such as /2012/02/12/yadda yadda have been moved to /blog/2012/02/12. We 301 redirect to WordPress at /blog to keep our Google juice. However, the sitemap we submitted might have /blog URLs; I'm not sure how much this matters. We clearly want to preserve Google juice for URLs linked to us from before our redesign with the /2012/02/... URLs, so perhaps this has prevented some content from getting recrawled? How can we get all of our content, with links pointed to our site from pre- and post-redesign, reporting descriptions? How can we resolve this problem and get our search traffic back to where it used to be?
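
    For reference, a minimal allow-everything robots.txt looks like this (the sitemap line is optional and the URL is a placeholder):

        User-agent: *
        Disallow:

        Sitemap: http://www.example.com/sitemap.xml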

    Read the article

  • Wireless doesn't work after installing 11.10

    - by Ingram
    I just did a fresh install of 11.10 32-bit and I can't get my wireless to work. I installed the Broadcom STA wireless drivers through Additional Drivers and rebooted, but it still doesn't see any wireless networks. Did something change in 11.10 that makes the wireless card not work anymore? I was using 10.10 before, and it worked fine. Do I need to go back to 10.10? The output of sudo lshw -C network is:

        *-network
             description: Ethernet interface
             product: NetLink BCM5784M Gigabit Ethernet PCIe
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:03:00.0
             logical name: eth0
             version: 10
             serial: 00:1f:16:be:55:ff
             size: 100Mbit/s
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.119 duplex=full firmware=sb v2.19 ip=192.168.0.70 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s
             resources: irq:43 memory:f0300000-f030ffff
        *-network UNCLAIMED
             description: Network controller
             product: BCM4311 802.11b/g WLAN
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:09:00.0
             version: 01
             width: 32 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list
             configuration: latency=0
             resources: memory:f0400000-f0403fff
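
    The lshw output shows the BCM4311 as UNCLAIMED, i.e. no driver is bound to it. One commonly suggested fix for this particular chip (an assumption, not a confirmed solution for this machine) is the open b43 driver plus its firmware instead of the STA driver:

        sudo apt-get update
        sudo apt-get install firmware-b43-installer   # fetches the firmware the b43 driver needs for the BCM4311
        sudo modprobe b43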

    Read the article

  • Multilingual website without language component in the URL

    - by user359650
    I'm working on a website for Canada which will have French and English versions. For SEO purposes, I would like to avoid using any language tag in URLs because I believe it will have more impact (e.g. example.ca/products is better than en.example.ca/products or example.ca/en/products). I believe this is technically possible because the two languages are sufficiently different that the URLs won't conflict with one another (e.g. if you want a "product" page, it will be /products in English and /produits in French, so you know which language the URL is about). Since Google (and most likely others) doesn't rely on the URL (nor on HTML tags) to determine the content language, I don't see any problems with search engines. To make this possible, I've thought about using a cookie distinct from the session cookie (e.g. example.org_language) with a long-term expiry (e.g. N years) that will memorize the language chosen by the user. That way, when people visit the website with a new browser session, they get served the proper language. I have already given up on letting users switch an individual page from English to French: when people choose English or French from the menu, they will be redirected to the corresponding version of the home page. Do you foresee any problems with not using a language component in the URL (whether domain or path), as long as one makes sure URLs don't conflict?
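
    A minimal sketch of that long-lived language cookie in PHP (the cookie name and the five-year expiry are placeholder choices, not from the original question):

        <?php
        $supported = array('en', 'fr');

        if (isset($_GET['lang']) && in_array($_GET['lang'], $supported)) {
            // the user picked a language from the menu
            setcookie('example_language', $_GET['lang'], time() + 5 * 365 * 24 * 3600, '/');
            $lang = $_GET['lang'];
        } elseif (isset($_COOKIE['example_language']) && in_array($_COOKIE['example_language'], $supported)) {
            // returning visitor: serve the remembered language
            $lang = $_COOKIE['example_language'];
        } else {
            $lang = 'en'; // default for first-time visitors
        }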

    Read the article

  • Stylecop 4.7.37.0 has been released

    - by TATWORTH
    Stylecop 4.7.37.0 has been released at http://stylecop.codeplex.com/releases/view/79972. The release notes follow:
    - Add docs for new SA1650 spelling rule.
    - Fix for 7395. Don't remove parentheses around await expressions.
    - Insert a returns element into docs within a see element.
    - Update our tools folder StyleCop DLLs.
    - Fix for 7392. Insert generic type docs for return types correctly.
    - Fix for 7393. Allow documentation elements with attributes to end the string and still be valid.
    - Make sure the MSBuild Task logs the warning id and type of exception. Unless the description field holds all this info, VS cannot show the text in the Error List.
    - Load custom dictionaries for multiple cultures. For a culture like en-GB, we load CustomDictionary.xml, then look for CustomDictionary.en-GB.xml and then CustomDictionary.en.xml.
    - Update standard shipping dictionaries.
    - Element documentation spelling fixes.
    - Reduce the standard dictionary.
    - Update our own devbuild StyleCop checks.
    - Don't check spelling of xml documentation attributes or anything inside <c> or <code> elements.
    - Update styling. Styling update.
    - Add timestamps for all the dependent files into the StyleCopResults.cache.
    - Add a FileSystemWatcher to all custom dictionary files.
    - Write out the full violation into the StyleCopResults.cache.
    - Change a rule's description text.
    - Styling fixes. Styling fixes.
    - NEW RULE: Check Spelling Of Element Documentation. Fix over 2000 spelling errors in our source code. Update the VS addin to show the rule violation in more detail. Add spelling checker to the deployment.
    - Set our own Culture to en-US.
    - Documentation spelling fixes.
    - First draft of the documentation spelling checker.
    - Fix for 7325. Don't throw 1126 in goto statements.
    - Fix for 7090. Add TargetsDir to registry during install.
    - Fix for 7060. Sort usings after moving them inside namespace.
    - Fix FxCop issues.
    - Fix for 7389. Detect CpuCount on Unix/Mac.
    - Fix for 6788. Allow opening curly brackets for scope. Added new tests.
    - Updating constants.
    - Fix for 7167. Show version number of StyleCop in VS Help window.
    - Only output StyleCop excluded files if there are any.

    Read the article

  • Unable to mount an LVM Hard-drive after upgrade

    - by Bruce Staples
    I imagine this is a basic gotcha, but I can't see it. I have a system with two physical hard drives. The boot drive (/dev/sda) was running 10.04 and the second drive (/dev/sdb) was just a mounted filesystem. I did a clean load of Ubuntu 12.04 overwriting /dev/sda (not an upgrade) and now cannot mount the second drive, so I do not know what to enter into fstab. I had expected to use:

        /dev/sdb /tera ext4 defaults 0 2

    But even manual mounting fails (I have also tried various -t options on the off chance!):

        sudo mount -t ext4 /dev/sdb1 /tera
        mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    Output from disk queries indicates that it is a Linux LVM and still a healthy disk. sudo lshw -C disk gives:

        *-disk:0
             description: ATA Disk
             product: WDC WD5000AACS-0
             vendor: Western Digital
             physical id: 0
             bus info: scsi@2:0.0.0
             logical name: /dev/sda
             version: 01.0
             serial: WD-WCASU1401098
             size: 465GiB (500GB)
             capabilities: partitioned partitioned:dos
             configuration: ansiversion=5 signature=00015a55
        *-disk:1
             description: ATA Disk
             product: WDC WD10EADS-00L
             vendor: Western Digital
             physical id: 1
             bus info: scsi@3:0.0.0
             logical name: /dev/sdb
             version: 01.0
             serial: WD-WCAU47836304
             size: 931GiB (1TB)
             capabilities: partitioned partitioned:dos
             configuration: ansiversion=5

    And sudo fdisk -l gives:

        Disk /dev/sda: 500.1 GB, 500106780160 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976771055 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00015a55

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   972580863   486289408   83  Linux
        /dev/sda2       972582910   976769023     2093057    5  Extended
        /dev/sda5       972582912   976769023     2093056   82  Linux swap / Solaris

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1               1  1953525167   976762583+  8e  Linux LVM

    LVM doesn't appear to be an option for mount or fstab. (A SMART data screenshot from Disk Utility was also attached.)
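
    Because /dev/sdb1 is an LVM physical volume, it cannot be mounted directly; the logical volume inside it has to be activated first. A sketch of the usual steps (the volume-group and logical-volume names are placeholders to be read off the lvs output):

        sudo apt-get install lvm2   # the LVM tools are not always installed by default
        sudo pvscan                 # should list /dev/sdb1 as a physical volume
        sudo vgchange -ay           # activate every volume group found
        sudo lvs                    # note the VG and LV names, then:
        sudo mount /dev/<vgname>/<lvname> /tera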

    Read the article

  • Oracle OpenWorld 2012 is Around the Corner - Discover AutoVue Activities

    - by Pam Petropoulos
    Planning to attend Oracle OpenWorld 2012? If so, be sure to check out the various AutoVue Enterprise Visualization activities that you can take advantage of while in San Francisco.

    AutoVue Sessions:

    CON8381 – Streamline PLM Design-to-Manufacturing Processes with AutoVue Visualization (click here for the full session description)
    Date: Monday, October 1, 2012
    Time: 10:45 a.m. – 11:45 a.m.
    Location: Intercontinental Hotel – Telegraph Hill
    Customer Speaker: Siew Yeow Loye, Global Foundries

    CON8385 – Optimize Asset Performance and Reliability with AutoVue Visualization (click here for the full session description)
    Date: Thursday, October 4, 2012
    Time: 2:15 p.m. – 3:15 p.m.
    Location: Palace Hotel – Gold Ballroom

    AutoVue Demo Pods:

    Demo ID: 3122 – AutoVue: PLM & Enterprise Visualization; Moscone West, Workstation W-082
    Demo ID: 3001 – Oracle E-Business Suite Enterprise Asset Management and AutoVue Visualization Solutions; Palace Hotel, Level 2, HPU-008

    Customers are also invited to attend the Oracle OpenWorld 2012 Supply Chain Management Customer Reception on Tuesday, October 2, 2012. This year's event is being held at the ROE Lounge, located just 2 blocks from Moscone Center, and offers a casual and upbeat atmosphere so you can mix and mingle with friends and colleagues. This event sold out last year and space is again limited, so register today.
    Date: Tuesday, October 2, 2012
    Time: 6:00 p.m. – 8:00 p.m.
    Location: ROE Lounge, 651 Howard Street, San Francisco

    For additional information regarding AutoVue sessions, demos, and activities, be sure to review the AutoVue FocusOn document. Join us at Oracle OpenWorld, September 30 – October 4, 2012, and discover new products, solutions, and practices to make you even more successful in your job and in your industry.

    Read the article

  • Ubuntu 12.10 with kernel 3.5.0-18

    - by Chaitanya
    I had Ubuntu 12.04 with kernel 3.0.2. Today I updated my system and got 12.10 with kernel 3.5.0-18. Now when I boot my machine with the 3.5 kernel, it starts fine up to the page where I enter my password. Within seconds, I get a page with a looooong list of kernel messages. I can't take a screenshot of it, but it looks something like:

        [1.2234978942837] kjsahfa;lsfksld;fkjsf;owieurwirejw/rnw;erkjwelrjw2309480432
        [1.3294823498230948] as;lfjsf;iuwrijrwjlkerjw;rekwer;lkwjre;lkjRIJWEORIWE'JJA;

    Luckily, my boot menu also has the 3.0.2 kernel. When I boot with 3.0.2 there is no problem, but when I boot with 3.5.0 it throws that weird error. I can't do anything at that point; none of the keys work. I have to forcibly shut down the machine and restart with the 3.0.2 kernel. Please help. Thanks, Chaitanya.
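
    As a stopgap (an addition, not from the original post), GRUB can be told to boot the working 3.0.2 kernel by default until the 3.5.0 problem is diagnosed; the menu position below is an example, so check your own grub.cfg first:

        grep menuentry /boot/grub/grub.cfg   # find the position of the 3.0.2 entry
        sudo nano /etc/default/grub          # set e.g. GRUB_DEFAULT="1>2" (submenu 1, entry 2)
        sudo update-grub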

    Read the article

  • SEO - different data with same title and keywords

    - by Junaid Saeed
    Here is my scenario: I have a website where I redirect my users based on the device they are using. Let's say a user is visiting from an iPad; I take him directly to the page of iPad wallpapers. The user selects an iPad version, and I take him to the gallery of wallpapers, where he can select and download any wallpaper, each at the required resolution (I have my reasons for doing this). Now, the thing is, there are different resolution versions of an image appearing in five different sections of my website, each having its own view page. There is only one record in the database table for the image, and based on my consistent naming convention for the images, I pick the required one. This means that when five pages are generated in the five categorized sections of the website, then, due to the shared DB record, the keywords, the titles, and every single detail of the five pages are the same besides the resolution of the image and the section-specific details. The pages also have different paths, like:

        wallpapers.com/ipad-1/cars/Ferrari-dino.html
        wallpapers.com/ipad-2/cars/Ferrari-dino.html
        wallpapers.com/ipad-3/cars/Ferrari-dino.html
        wallpapers.com/ipad-4/cars/Ferrari-dino.html
        wallpapers.com/ipad-5/cars/Ferrari-dino.html

    Now this is my scenario. How do search engines see it and how do they rank it? Is it a good, normal, or bad SEO practice? If bad, how dangerous is it for my site's SEO? I need your comments on my scenario.
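
    One standard tool for near-duplicate pages like these is rel="canonical", which tells search engines which of the five URLs to treat as the primary one. A sketch, assuming the ipad-1 page were chosen as primary (the choice is illustrative, not a recommendation from the post):

        <link rel="canonical" href="http://wallpapers.com/ipad-1/cars/Ferrari-dino.html" />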

    Read the article

  • Help parsing long (3.5mil lines) text file, line by line and storing data, need a strategy

    - by Jarrod
    This is a question about solving a particular problem I am struggling with: I am parsing a long list of text data, line by line, for a business app in PHP (a cron script on the CLI). The file follows this format:

        HD: Some text here {text here too}
        DC: A description here
        DC: the description continues here
        DC: and it ends here.
        DT: 2012-08-01
        HD: Next header here {supplemental text}
        ...

    This repeats over and over for a few hundred megs. I have to read each line and parse out the HD: line, grabbing the text on that line. I then compare this text against data stored in a database. When a match is found, I want to record the DC: lines that follow the matched HD:. Pseudocode:

        while (the_file_pointer_isnt_end_of_file) {
            line = getCurrentLineFromFile
            title = parseTitleFrom(line)
            matched = searchForMatchInDB(line)
            if (matched) {
                recordTheDCLines // <- Best way to do this?
            }
        }

    My problem is that, because I am reading line by line, what is the best way to trigger the script to start saving DC lines, and then, when they are finished, save them to the database? I have a vague idea but have yet to properly implement it. I would love to hear the community's ideas/suggestions! Thank you.
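
    A runnable sketch of the buffering strategy the pseudocode hints at: collect DC: lines after a matched HD: and flush them whenever the next HD: arrives (and once more at end of file). The two database functions are hypothetical stand-ins:

        <?php
        // hypothetical stand-ins for the real DB logic
        function matchesDatabase($title) { /* query your table here */ return true; }
        function saveToDatabase($dcLines) { /* insert the block here */ }

        $handle = fopen('data.txt', 'r');
        $collecting = false;
        $dcLines = array();

        while (($line = fgets($handle)) !== false) {
            if (strpos($line, 'HD:') === 0) {
                if ($collecting && count($dcLines) > 0) {
                    saveToDatabase($dcLines);   // flush the previous matched block
                }
                $dcLines = array();
                $title = trim(substr($line, 3));
                $collecting = matchesDatabase($title);
            } elseif ($collecting && strpos($line, 'DC:') === 0) {
                $dcLines[] = trim(substr($line, 3));
            }
        }
        if ($collecting && count($dcLines) > 0) {
            saveToDatabase($dcLines);           // flush the final block
        }
        fclose($handle);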

    Read the article

  • Suggestions for a Self-serv advertising service

    - by Mystere Man
    I am seeking a self-serv advertising service for my websites, but I have a few restrictions that seem to make what I'm looking for hard to find. Specifically, I want to place "advertise here" links on my pages and allow end users to purchase advertising on that site, page, and location. These ads will not be part of a national network. Requirements:
    - Supports multi-tenancy. That is, I have a number of domains using the same "web application" but with customized content per domain. When a customer wants to advertise on a given domain, the ads will only appear on that domain and on that page of the domain (even though the page name may be the same across multiple domains).
    - Supports fixed ad prices, not just CPC. I need monthly and quarterly pricing regardless of performance.
    - Integrates with OpenX and other ad networks, so that if there is no self-serv ad in a given zone, it will use national or direct advertising.
    Shiny Ads has much of this, but I'm looking for alternatives, as their prices are a bit crazy (20%) and they can only do PayPal.

    Read the article

  • htaccess correct, Apache logs still showing the evil visitors with 200 code

    - by bulgin
    I hope someone can help me. Please take a look at the following snippet of Apache logs:

        95-169-172-157.evilvisitor.com - - [12/Nov/2012:09:46:02 -0500] "GET /the-page-I-dont-want-to-deliver.html HTTP/1.1" 200 9171 "http://hackers.ru/" "Mozilla/4.0 (MSIE 6.0; Windows NT 5.1; Search)"

    I have the following included in my .htaccess for the root directory of the website, and there are no other .htaccess files anywhere that would affect this:

        RewriteEngine On
        Options +FollowSymLinks
        ServerSignature Off
        ErrorDocument 403 "Nothing Interesting Here"
        order allow,deny
        deny from evilvisitor.com
        deny from hackers.ru
        deny from anonymouse.org
        allow from all

    I also have GeoIP functioning properly and have this included there:

        # for stuff from different countries
        RewriteCond %{ENV:GEOIP_COUNTRY_CODE} ^(UA|TR|RU|RO|LV|CZ|IR|HR|KR|TW|NO|NL|NO|IL|SE)
        RewriteRule ^(.*)$ - [R=F,L]

    I know this works because whenever I attempt to access the website from a proxy in, say, Spain, I get the error message. I also know it works because when accessing the website from anonymouse.org, the proper error code page is displayed. So why am I still getting these visitors who successfully access the page I don't want them to see, with an Apache 200 code, when it should be an error code?
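
    One detail worth knowing (not stated in the post): deny from with a hostname only works if Apache can resolve the visitor's IP back to that exact name at request time, which frequently fails. Matching the referer or the IP range directly is more reliable; a sketch that could sit alongside the existing rewrite rules:

        RewriteCond %{HTTP_REFERER} hackers\.ru [NC]
        RewriteRule .* - [F]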

    Read the article

  • Extension GLX missing...on a desktop PC

    - by Bart van Heukelom
    I just installed Ubuntu 12.10 on a new PC with an Nvidia GTX 560 graphics card, but after installing the Nvidia proprietary drivers (either -current or -current-updates), Unity won't start. When trying to start it manually I get the message "extension GLX missing". I've searched around and found results like this question, which point out it's a problem with Nvidia Optimus laptops. However, I don't have this problem on a laptop, but on a desktop PC. lshw output for the graphics card:

        *-display
             description: VGA compatible controller
             product: GF114 [GeForce GTX 560 SE]
             vendor: NVIDIA Corporation
             physical id: 0
             bus info: pci@0000:01:00.0
             version: a1
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
             configuration: driver=nouveau latency=0
             resources: irq:16 memory:f4000000-f5ffffff memory:e0000000-e7ffffff memory:e8000000-ebffffff ioport:e000(size=128) memory:f6000000-f607ffff

    and the CPU:

        *-cpu
             description: CPU
             product: Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz
             vendor: Intel Corp.
             physical id: 40
             bus info: cpu@0
             version: Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz
             slot: SOCKET 0
             size: 1600MHz
             capacity: 3800MHz
             width: 64 bits
             clock: 100MHz
             capabilities: x86-64 fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms cpufreq
             configuration: cores=4 enabledcores=4 threads=4
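
    Note that the lshw output shows driver=nouveau, i.e. the proprietary module never bound. One common recovery path (an assumption, not a confirmed fix) is to reinstall the packaged driver and rebuild the initramfs so the nouveau blacklist takes effect:

        sudo apt-get install --reinstall nvidia-current
        sudo update-initramfs -u
        sudo reboot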

    Read the article

  • Alternative to nofollow: custom 302 url shortener?

    - by Dogweather
    Here's the scenario: many blogging platforms make it tedious to insert nofollow into links within the post content, i.e. you need to edit the HTML, format it correctly, etc. I have a client who posts lots of content with links that should be nofollowed, and I thought of a novel way to handle this, since the blogging platform they're using makes it hard: I install a URL shortener web app on the client's domain. The shortener works as normal, except that it redirects via 302 instead of 301. The PageRank will therefore stay at the shortener's domain and not flow on to the target site. Part 2: in order to get the PageRank to collect meaningfully, say on the site's home page, the shortened URLs would be generated like this: /link?12345 instead of /link/12345. Then the path /link would 301 to the home page. This way the id is a param, not a path element, and thus all the incoming shortened links go to one path, which transfers PageRank to the home page. So that's my idea. I wanted to see if anybody could find problems with it. Thanks!
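
    A minimal PHP sketch of the /link?12345 scheme described above (lookupTarget() is a hypothetical stand-in for the real database lookup, and example.org is a placeholder):

        <?php
        // hypothetical stand-in for the real DB lookup
        function lookupTarget($id) {
            $map = array('12345' => 'http://example.org/some-target');
            return isset($map[$id]) ? $map[$id] : null;
        }

        $id = key($_GET);   // /link?12345 parses as a single key "12345"
        $target = $id ? lookupTarget($id) : null;

        if ($target !== null) {
            header('Location: ' . $target, true, 302);          // temporary: PageRank stays here
        } else {
            header('Location: http://example.org/', true, 301); // bare /link goes to the home page
        }
        exit;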

    Read the article

  • Account Listings on APress and O'Reilly

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/06/20/e-books-from-apress-and-oreilly.aspx
    In recent days, both Apress and O'Reilly have radically improved the way they display items registered against your account with them. Apress now shows only one line per e-book, and the multitude of formats is handled by a drop-down list. The result is that the list of Apress books I have bought now requires less paging through. The only things the Apress list lacks are:
    - the ability to show all on one page (currently there are options for 10, 20 and 50 per page);
    - the ability to sort on title, date bought or date updated.
    O'Reilly have always shown the formats available as a series of hyperlinks along one line. They have improved their list as follows:
    - You can sort on title, date bought or date updated.
    - Clicking on a line shows the full detail of the item (including image, download details, errata link and catalog page). Clicking again collapses the detail.
    - You can select all your purchased items together, or show just e-books, print or videos.
    Now why is the date updated important? Updates are issued for various books (particularly those made available while still being written). The publishers very kindly email you when an update is available, but finding it in the list to download again is not as easy as you'd think; however, sort on release date and they are easy to find!

    Read the article

  • How to copy or replicate a complex website to a local file and then modify it

    - by Andre Chenier
    I am not good at designing the visual side of a website. I found a website which I'd rate 10 out of 10 because its functionality suits my aims and it also seems very aesthetically pleasing. I know HTML, PHP, MySQL and some degree of CSS; I don't know JS, Ajax or jQuery. So I want to replicate this website (save it completely) on my local machine and then modify it (content, colors, icons, etc.). I saved this website in Chrome and IE, but after opening the saved copy from my local folder, I saw an ugly, non-working site. My aim is also to understand the functions of the parts I don't know, for example, what happens when I delete a particular JS file from the page. Since the page is complex, with lots of CSS and JS files to download, I don't want to handle them manually. Is there an alternative and easy way to get the web page completely onto my local machine so that it also works like a charm locally? Regards.
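
    For the static side of this, wget can mirror a page together with its CSS, JS and images and rewrite the links so the copy works from a local folder. A sketch (note it captures only the rendered output, never the server-side PHP, and heavily scripted pages may still misbehave):

        wget --mirror --convert-links --adjust-extension --page-requisites \
             --no-parent http://example.com/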

    Read the article

  • The Importance of Fully Specifying a Problem

    - by Alan
    I had a customer call this week where we were provided a forced crash dump and asked to determine why the system was hung. Normally when you are looking at a hung system, you will find a lot of threads blocked on various locks, and most likely very little actually running on the system (unless it's threads spinning on busy-wait type locks). This vmcore showed none of that. In fact, we were seeing hundreds of threads actively on CPU in the second before the dump was forced. This prompted a question back to the customer: what exactly were you seeing that made you believe the system was hung? It took a few days to get a response, but the answer was that they were not able to ssh into the system, and when they tried to log in on the console they got the login prompt, but after typing "root" and hitting return, the console was no longer responsive. This description puts a whole new light on the "hang". You immediately start thinking "name services". Looking at the crash dump, yes, the sshds are all in door calls to nscd, and nscd is idle, waiting on responses from the network. Looking at the connections, I see a lot of connections to the secure LDAP port in CLOSE_WAIT, but more interestingly, a few connections over the non-secure LDAP port to a different LDAP server just sitting open. My feeling at this point is that we have an LDAP server that is either not responding or responding slowly, and the resolution is to investigate that server. Moral: when you log a service ticket for a "system hang", it's great to get the forced crash dump first up, but it's even better to get a description of what you observed that made you believe the system was hung.

    Read the article

  • Internet Timeouts with TP-Link TL-WN821N v2 wireless usb stick

    - by user1622959
    A short time after accessing the internet, the browser/download times out. Before the timeout, the internet works OK briefly; afterwards, the wireless is still connected with a strong signal, but every internet access results in a timeout. When I leave the PC for a while, the internet is back, just to time out again as soon as I start using it. The same happens when I reconnect to the router. Also, when I surf the internet it takes a couple of minutes until the timeout, but when I download something it times out in a matter of seconds. The wireless adapter works just fine in Windows, and internet via ethernet cable works just fine in Ubuntu. Does anyone have the same problem or know a solution? I use Ubuntu 12.10 x64, and the problem has occurred since I installed Ubuntu (which was a few days ago). Here is some output that might be useful:

        serus@serus-Ubuntu-PC:~$ lsusb
        Bus 002 Device 002: ID 0cf3:1002 Atheros Communications, Inc. TP-Link TL-WN821N v2 802.11n [Atheros AR9170]

        serus@serus-Ubuntu-PC:~$ lsmod
        Module      Size   Used by
        carl9170    82083  0

        serus@serus-Ubuntu-PC:~$ modinfo carl9170
        filename:    /lib/modules/3.5.0-21-generic/kernel/drivers/net/wireless/ath/carl9170/carl9170.ko
        alias:       arusb_lnx
        alias:       ar9170usb
        firmware:    carl9170-1.fw
        description: Atheros AR9170 802.11n USB wireless

        serus@serus-Ubuntu-PC:~$ iwconfig
        wlan0  IEEE 802.11bgn  ESSID:"virginmedia0137463"
               Mode:Managed  Frequency:2.462 GHz  Access Point: A0:21:B7:F8:29:B6
               Bit Rate=240 Mb/s  Tx-Power=20 dBm
               Retry long limit:7  RTS thr:off  Fragment thr:off
               Power Management:off
               Link Quality=66/70  Signal level=-44 dBm
               Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
               Tx excessive retries:1399  Invalid misc:18  Missed beacon:0

        serus@serus-Ubuntu-PC:~$ sudo lshw -C network
        *-network
             description: Wireless interface
             physical id: 1
             bus info: usb@2:2
             logical name: wlan0
             serial: 00:27:19:bb:00:19
             capabilities: ethernet physical wireless
             configuration: broadcast=yes driver=carl9170 driverversion=3.5.0-21-generic firmware=1.9.4 ip=192.168.0.6 link=yes multicast=yes wireless=IEEE 802.11bgn

    Read the article

  • What's wrong with cplusplus.com?

    - by Kerrek SB
    This is perhaps not a perfectly suitable forum for this question, but let me give it a shot, at the risk of being moved away. There are several references for the C++ standard library, including the invaluable ISO standard, MSDN, IBM, cppreference, and cplusplus. Personally, when writing C++ I need a reference that has quick random access, short load times and usage examples, and I've been finding cplusplus.com pretty useful. However, I've been hearing negative opinions about that website frequently here on SO, so I would like to get specific: What are the errors, misconceptions or bad pieces of advice given by cplusplus.com? What are the risks of using it to make coding decisions? Let me add this point: I want to be able to answer questions here on SO with accurate quotes of the standard, and thus I would like to post immediately-usable links, and cplusplus.com would have been my choice site were it not for this issue. Update: There have been many great responses, and I have seriously changed my view on cplusplus.com. I'd like to list a few choice results here; feel free to suggest more (and keep posting answers). As of June 29, 2011: Incorrect description of some algorithms (e.g. remove). Information about the behaviour of functions is sometimes incorrect (atoi), fails to mention special cases (strncpy), or omits vital information (iterator invalidation). Examples contain deprecated code (#include style). Inexact terminology is doing a disservice to learners and the general community ("STL", "compiler" vs "toolchain"). Incorrect and misleading description of the typeid keyword.
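
    As a concrete illustration of the remove complaint: std::remove only shifts the kept elements forward and returns the new logical end; it never shrinks the container, which is why the erase-remove idiom exists. A minimal example (mine, not from the question):

        #include <algorithm>
        #include <cassert>
        #include <vector>

        int main() {
            std::vector<int> v;
            v.push_back(1); v.push_back(2); v.push_back(3); v.push_back(2);

            // remove() alone would leave v.size() unchanged; erase() does the shrinking
            v.erase(std::remove(v.begin(), v.end(), 2), v.end());
            assert(v.size() == 2);  // v is now {1, 3}
        }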

    Read the article

  • Will having multiple domains improve my SEO?

    - by Anonymous12345
    Let's say I already have a domain, for example www.automobile4u.com (not mine), with a website fully running and all. The title of my website says:

        <title>Used cars - buy and sell your used cars here</title>

    Let's also say I have fully SEO'd the website, so when people search for the term "buy used cars", I end up on the first or second page. Now I want to end up higher, so I go to the Google AdWords page where you can check how many searches are made for specific terms. Let's say the term "used cars" has 20 million searches each month. Here comes the question: could I just go and buy the domain matching that search term, in this case www.usedcars.com, and make it redirect to my original page? That way, when people search for "used cars", my newly bought domain name comes up, redirecting people to my original website (www.automobile4u.com). The reason I believe this benefits me is that search engines seem to first check website addresses matching the search, so the query "used cars" would automatically bring www.usedcars.com to the first result, right? What are the downsides of this? I already know about Google spiders not liking redirects, but there are many methods of redirecting... Is this a good idea generally?
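
    For what it's worth, if you did buy the extra domain, the conventional setup is a permanent redirect rather than a mirror; a sketch for Apache, using the poster's example domains:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?usedcars\.com$ [NC]
        RewriteRule ^(.*)$ http://www.automobile4u.com/$1 [R=301,L]

    Note, though, that a 301 passes the target site's own relevance signals; the keyword in the redirecting domain name itself generally contributes little.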

    Read the article

  • Multiple Businesses at The Same Physical Address - SEO / Google Places

    - by Howdy_McGee
    I was wondering whether there would be any negative effects from having multiple businesses use the exact same physical address on their websites. Currently we have five businesses at the exact same address, and the address shows on each website, so when people google one of the five businesses' addresses, they are going to get results from multiple websites, most of which will not be what they are looking for. What is a way around this? Would adding "Suite Numbers" be a solution? A thought occurred to me that it might be a good idea to create a landing page for users who look up a business by its address via Google. The page would come up for multiple businesses, since we have several at the same address, but a landing page at the top leading to the individual businesses might solve the multiple-address SEO problem. I'm going to keep researching it, though. I also know this could become a problem for Google Places (and possibly Bing Local and Yahoo Local). I've submitted an inquiry with them, but I wanted to know if anybody has a ready-made solution so that Google doesn't bunch all these companies together into one. Thanks!

    Read the article

  • Can't see like plugin iframe on (at least) some browsers [migrated]

    - by MEM
    Not sure why. I grabbed the code from http://developers.facebook.com/docs/reference/plugins/like/, where we can read: "href - the URL to like. The XFBML version defaults to the current page. Note: After July 2013 migration, href should be an absolute URL." So I did:

        <body>
          <div>
            <iframe src="//www.facebook.com/plugins/like.php?href=https%3A%2F%2Fwww.facebook.com%2Fprojectokairos&amp;width=100&amp;height=21&amp;colorscheme=light&amp;layout=button_count&amp;action=like&amp;show_faces=false&amp;send=false"
                    scrolling="no" frameborder="0"
                    style="border:none; overflow:hidden; width:100px; height:21px;"
                    allowTransparency="true"></iframe>
          </div>
        </body>

    Could this be related to the fact that the page is unpublished? I hope not, because I do need to place the button here and there on several pages before the FB page goes live.

    Read the article
