Search Results

Search found 11753 results on 471 pages for 'technical links'.


  • Description Class for Navigation Links not working

    - by Carmel
    My problem is with my navigation links. I've created a class for the links so that the link color is different from the typical links throughout. The problem is that the color defined in a:visited is taking precedence over the color defined in a:link. I've tried everything and can't work this out. Any suggestions?
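    Not the asker's code, but a minimal sketch of the usual approach, assuming a hypothetical .nav-links wrapper class: give the class its own rule for every link state, in link/visited/hover/active order, so that a broader a:visited rule elsewhere cannot win on specificity or source order.

        /* Class-scoped states beat a site-wide a:visited on specificity */
        .nav-links a:link    { color: #cc0000; }
        .nav-links a:visited { color: #cc0000; } /* match :link if visited links should look the same */
        .nav-links a:hover   { color: #ff3333; }
        .nav-links a:active  { color: #990000; }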

    Read the article

  • Making links clickable from comments in php

    - by neat
    I'm trying to create functions that will give my chat clickable links. Here are the functions I've created:

        <?php
        // makes links starting with http clickable
        function makehttpclickable($text) {
            return preg_replace('!(((f|ht)tp://)[-a-zA-Zа-яА-Я()0-9@:%_+.~#?&;//=]+)!i', '<a href="$1">$1</a>', $text);
        }

        // makes links starting with www. clickable
        function clickywww($www) {
            return preg_replace('!((www)[-a-zA-Zа-яА-Я()0-9@:%_+.~#?&;//=]+)!i', '<a href="$1">$1</a>', $www);
        }

        // function that gives me an error!
        function clickydotcom($noob) {
            return preg_replace('!([-a-zA-Zа-яА-Я()0-9@:%_+.~#?&;//=]+)(\.com)!i'.'!([-a-zA-Zа-яА-Я()0-9@:%_+.~#?&;//=]+)(\.com)!f', '<a href="$1.com$f">$1.com</a>', $noob);
        }

    I've been getting an unknown modifier error:

        Warning: preg_replace() [function.preg-replace]: Unknown modifier '!'

    So anyway, any help would be nice on how I can make all types of links clickable.
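    As an aside, a sketch of one direction a fix could take: preg_replace expects a single pattern, so concatenating two delimited patterns (and the invalid f modifier) is what triggers the "unknown modifier" warning; one combined pattern with alternation avoids it. The function name is hypothetical, and the bare-domain branch assumes links always end in .com:

        <?php
        // Sketch only: wraps http(s)://, ftp://, www., and bare *.com strings
        // in anchor tags with a single pattern instead of two concatenated ones.
        function make_links_clickable($text) {
            $pattern = '!((?:(?:f|ht)tps?://|www\.)[-a-zA-Zа-яА-Я()0-9@:%_+.~#?&;/=]+'
                     . '|[-a-zA-Zа-яА-Я0-9]+\.com\b)!i';
            return preg_replace($pattern, '<a href="$1">$1</a>', $text);
        }
        echo make_links_clickable('visit www.example.com or http://example.org');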

    Read the article

  • Thunderbird: filters don't match links

    - by Gregory MOUSSAT
    I use filters to remove some undesirable messages (in addition to the integrated spam filter). This is great for avoiding the tons of boring people who want to sell me tons of boring stuff. My problem is that for years (so with every Thunderbird release I have ever had, including the current, up-to-date one) it has been unable to filter on links. For example, I want to delete every message containing a link to http://xxxxx.emv3.com/xxxxxx but I have never managed to remove those emails. I use a filter on the body, checking whether it contains emv3, but it never matches. Those emails are in HTML format, and the links are displayed as text like "Visit our website" or similar. If I write an HTML email with a link myself, my filter works; when it is spam, it never does. When I save the email to a text file and open it in Notepad, I see several occurrences of http://xxxxx.emv3.com/xxxx Any idea why this doesn't work, and what I can do about it?

    Read the article

  • Appending target="_blank" to links in an iframe without using <body onload>

    - by CincauHangus
    I'm trying to change the links in an iframe so they load in a new window instead of in the iframe itself. Currently I use this code in the head:

        $(document).ready(function() {
            var oIFrame = document.getElementById("iframeID");
            var oDoc = (oIFrame.contentWindow || oIFrame.contentDocument);
            if (oDoc.document) oDoc = oDoc.document;
            var links = oDoc.getElementsByTagName("a");
            for (var i = 0; i < links.length; i++) {
                links[i].target = "_blank";
            }
        });

    However, this code is triggered before the iframe has fully loaded its contents. I know it would work if it were triggered from the body onload attribute, but I'd like to avoid that method and implement it in a function or a file instead.
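    A minimal sketch of the usual alternative, assuming the same iframeID, a same-origin iframe, and jQuery 1.7 or later for .on(): bind the handler to the iframe's own load event rather than the document's ready event, so it fires once the framed content actually exists.

        $(document).ready(function() {
            // Fires after the framed document has loaded (same-origin only)
            $('#iframeID').on('load', function() {
                $(this).contents().find('a').attr('target', '_blank');
            });
        });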

    Read the article

  • Forbid links to open new tabs in Google Chrome

    - by Andrzej
    I'm looking for a way to prevent links in Google Chrome from opening new tabs (in most cases it is a target="_blank" issue). I want all links to open pages in the currently active tab, not in a new tab. I tried a wide range of add-ons (e.g. "Death to target=_blank") and Greasemonkey scripts that are supposed to remove the target="_blank" attribute, but none of them worked. It is extremely annoying when I want to switch between accounts in GMail or navigate from GMail to GDrive: I always get a new tab opened and have to close the last tab manually.
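    For reference, a minimal sketch of the kind of userscript those add-ons run, assuming a browser with MutationObserver support. It only helps when the page sets target attributes in markup; it cannot intercept apps that open tabs through window.open, which may be exactly why it fails on GMail:

        // ==UserScript==
        // @name     strip-target-blank (sketch)
        // @include  *
        // ==/UserScript==
        (function () {
            function strip(root) {
                var links = root.querySelectorAll('a[target]');
                for (var i = 0; i < links.length; i++) {
                    links[i].removeAttribute('target');
                }
            }
            strip(document);
            // Re-apply when the page inserts links dynamically
            new MutationObserver(function () { strip(document); })
                .observe(document.documentElement, { childList: true, subtree: true });
        })();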

    Read the article

  • Links from Google appending index.php to my URL

    - by davykiash
    I recently put up a site and have been doing some SEO. However, I noticed that links from Google search append index.php to my URLs. For example, a page that clearly appears in the search results as www.example.com/index/why (with the correct content sample) ends up in the browser as www.example.com/index.php/why when clicked. Note that on my site all links are redirected to SSL and I use an MVC structure. Are there any directives I may be missing?
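    A sketch of one common remedy, assuming Apache with mod_rewrite enabled and that the clean URLs already resolve: permanently redirect any request that still carries index.php, so crawlers and old links consolidate onto the canonical form. The rule is hypothetical and would need adjusting to the actual rewrite setup:

        RewriteEngine On
        # 301 requests that still carry index.php back to the clean URL;
        # THE_REQUEST matches the original client request, not internal rewrites,
        # so this does not loop with the framework's own index.php routing.
        RewriteCond %{THE_REQUEST} \s/index\.php/ [NC]
        RewriteRule ^index\.php/(.*)$ /$1 [R=301,L]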

    Read the article

  • Load links as unselected in jQuery

    - by shummel7845
    I have sets of links with id and name attributes: id for individual identifiers, name for groupings. In my jQuery I have a click function that manipulates the CSS based on what the user clicks on. But I want the freshly loaded page to start with some of the links disabled (no href attribute) and with a style applied. I tried this, but it didn't work:

        $(function() // Alias for $(document).ready()
        {
            var $links = $('li a');
            $links.(function(){
                $('a[name="beam"]').filter(function() {
                    $(this).addClass('unavailable');
                    $(this).removeAttr('href');
                });
            });
            ......

    Ideas? (Beam is one of the groupings by name.)
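    A minimal sketch of what was probably intended, keeping the asker's unavailable class and beam grouping: filter() is for narrowing a selection, not for side effects, so acting on the matched set directly is enough.

        $(function() {
            // Disable every link in the "beam" group on page load
            $('li a[name="beam"]')
                .addClass('unavailable')
                .removeAttr('href');
        });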

    Read the article

  • Print Any Document Type with AutoVue Document Print Services

    - by [email protected]
    The newly released AutoVue Document Print Services allow development organizations to automate and process high-volume printing operations, for both business and technical document types, within their broader enterprise applications. Many organizations' printing processes are limited by the fact that they can print only a small subset of the documents their enterprise users require. By integrating AutoVue Document Print Services and deploying them alongside their existing print server solutions, organizations can address that challenge and automate the printing of virtually any document type required in any business process, greatly extending the value of their print server solutions and improving business processes and workforce productivity. For further details, check out the AutoVue Document Print Services datasheet.

    Read the article

  • Associate Tech Support to Code Development [on hold]

    - by Abhay
    I have been selected in the first-phase screening of a company called CITRIX for the role of Associate Tech Support. Now we have to undergo three months of in-depth technical training (most probably with no certificate), and we will only get the job by passing a final test that selects 50% of the candidates from the first phase. Actually, I want to get into coding; that is where my passion lies. Is there any way I can get into the development department of this or any other company from my current profile? I am also wondering whether to go for the training or for a Java-based certification course (6 months) instead. Please note: the company is not asking for any bond.

    Read the article

  • How much to charge for being available for support?

    - by Tvrdy
    I work at a small development firm; our main products are ASP.NET and SQL Server based. We are deploying a new version of our product, and the client wants technical support with a 4-hour response time. Some of this support time will fall outside regular business hours, and our boss asked us how much we should charge for being available outside regular business hours. Is it 50% of our hourly fee? 20%? Any suggestion is appreciated; he is pushing us to speak up and I don't have any idea. I don't want to underprice my free time. Thanks.

    Read the article

  • What tool/framework to use for technical documentation?

    - by Pangea
    We develop products and frameworks to be used within our organization, and I am looking for programmer-friendly documentation tools. I researched a few options some time back but couldn't decide which one to use, so I am looking for suggestions from people who have already used these tools.
    - DocBook: the Spring Framework and Hibernate use this format and it looks good, but I believe they have customized the default XSLT/stylesheet. Can I copy and use their XSLT and CSS (of course with colors and images changed)? Can I integrate the doc generation into Maven? (See the sketch after this list.)
    - Wiki: this is not friendly to technical document writers and the documentation doesn't look professional; versioning is also not possible, I believe.
    - Word docs: this is what we use currently, but it is hard to link and reuse common documents.
    - DITA?
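    On the Maven question, a hedged sketch: the community docbkx-maven-plugin (groupId com.agilejava.docbkx, if memory serves; verify the coordinates and version before relying on this) can generate HTML from DocBook sources during the build. The source and stylesheet paths below are hypothetical:

        <plugin>
          <groupId>com.agilejava.docbkx</groupId>
          <artifactId>docbkx-maven-plugin</artifactId>
          <configuration>
            <!-- hypothetical layout: DocBook sources under src/docbkx -->
            <sourceDirectory>src/docbkx</sourceDirectory>
            <htmlCustomization>src/docbkx/xsl/html.xsl</htmlCustomization>
          </configuration>
          <executions>
            <execution>
              <phase>package</phase>
              <goals>
                <goal>generate-html</goal>
              </goals>
            </execution>
          </executions>
        </plugin>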

    Read the article

  • Delphi - online technical tests

    - by RBA
    I have searched for some online Delphi programming tests, and apart from the small test for Delphi certification and several tests on Delphi.about.com, I found nothing. Any ideas where I can find some online Delphi tests? LE: by online Delphi programming tests I mean technical questions about Delphi fundamentals, data types, classes, libraries, generics, database concepts, etc. Examples here (Delphi Developer Certification Exam Study Guide) and here. LE2: tests to take after you have read all the articles from this question: Questions every good Delphi developer should be able to answer?

    Read the article

  • Pros and cons of localisation of technical words?

    - by paercebal
    This question is directed at the non-English-speaking people here. It is somewhat biased because SO is an "English-speaking" web forum, so... On the other hand, most developers know English anyway... In your local culture, are technical words translated into local words? For example, how are "Design Pattern", "Factory", and similar terms written and said in German, Spanish, etc. when used in IT? Are the English words preferred? The local translation? Are the two versions (English/local) used about equally? Edit: could you include with your answer the local translation of "Design Pattern"? In French, according to Wikipedia.fr, it is "Patron de conception", which translates back as "Model of Conceptualization" (I guess).

    Read the article

  • SEO effect of “You are leaving this site” page for outbound links?

    - by Timo Huovinen
    The problem: I am working on an aggregation website that collects reviews about specific products from various websites. The site has many thousands of outbound links (with "nofollow" attributes) to the source websites the reviews were collected from. The site has far more outbound links than inbound links, and I have read that this is bad for SEO. The question: would adding an intermediate «You are leaving this site» disclaimer/warning page like this hurt search engine rankings? And can you provide any links about this topic? p.s. The exit page would be a POST form instead of a script (sketched below) that notifies the user that he/she is leaving this site and provides a button to continue to the other website. p.p.s. This kind of idea is implemented on many forums and aggregation websites, with the purpose of warning the user that he/she is leaving the site and of blocking search engine bots from following those links, because search bots do not submit forms.
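    For illustration, a minimal sketch of the POST-based exit page described in the postscript; the /go endpoint and the url field are hypothetical names. Because the destination travels in the request body and crawlers don't submit forms, the outbound URL is never exposed as a followable link:

        <!-- leaving.html: shown before any outbound link -->
        <p>You are leaving this site.</p>
        <form method="post" action="/go">
          <input type="hidden" name="url" value="http://example.com/review-source">
          <button type="submit">Continue to example.com</button>
        </form>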

    Read the article

  • Does Webmaster Tools list traffic from ads as inbound links?

    - by Mohamad
    In Webmaster Tools, under the inbound links section, do ads get counted as inbound links? I am reviewing the inbound links of a website and found that most of them come from meaningless blogs and spam websites. Before I accuse anyone of not doing their job properly, I would like to know: is it possible that those inbound links were generated when an ad for the website appeared on the spam website? An SEO firm was paid handsomely to generate inbound links, and I am afraid all they did was submit material to spam blogs and websites.

    Read the article

  • CIC 2010 - Ghost Stories and Model Based Design

    - by warren.baird
    I was lucky enough to attend the Collaboration and Interoperability Congress recently. The location was very beautiful and interesting: it was held in the mountains about two hours outside Denver, at the Stanley Hotel, famous both for inspiring Stephen King's novel "The Shining" and for attracting a lot of attention from the "Ghost Hunters" TV show. My visit was prosaic (I didn't get to experience the ghosts the locals promised) but interesting, with some very informative sessions.

    I noticed one main theme: a lot of people were talking about Model Based Design (MBD), which is moving design and manufacturing away from 2D drawings and towards 3D models. 2D has some pretty deep roots in industrial manufacturing, and there have been a lot of challenges in making the leap to 3D. One of the challenges discussed in several sessions was how to get model information out to the non-engineers in the company, which is a topic near and dear to my heart. In the 2D space, people without access to CAD software (for example, people assembling a product on the shop floor) can be given printouts of the design; it's not particularly efficient, and it definitely isn't very green, but it tends to work. There's no direct equivalent in the 3D space.

    One of the ways AutoVue is used in industrial manufacturing is to provide non-CAD users with an easy-to-use, interactive 3D view of their products. In some cases it's used directly by people on the shop floor; in cases where paper is really ingrained in the process, AutoVue can be used by a technical publications person to create illustrative 2D views that can be printed and that show all of the details necessary to complete the work.

    Are you making the move to model based design? Is AutoVue helping you with your challenges? Let us know in the comments below.

    Read the article

  • The spork/platypus average: shameless self promotion

    - by Roger Hart
    This is the video of the presentation I gave at UA Europe and TCUK this year. The actual subtitle was "Content strategy at Red Gate Software", but this heading feels more honest. For anybody who missed it, or is just vaguely interested, here's a link to me talking about de-suckifying the web. You can find the slideshare deck here, too.*

    Watching it back is more than a little embarrassing, and makes me really, really want to do a follow-up, so I can do three things:
    - explain the rest of the big web project, now we've done it
    - give some data on the outcome of the content review
    - make a grovelling apology to our marketing guys, who I've been unfairly mean to in a childish effort to look cool

    There are a whole bunch of other TCUK presentations online, too. You can find them all here: http://tiny.cc/tcuk10_videos I'd particularly recommend Chris Atherton's "Everything you always wanted to know about psychology and technical communication"; it's full of cool stuff. You should probably also watch David Black's opening keynote, which managed to make my hour of precocious grandstanding look measured, meek, and helpful. He actually makes some interesting points, but you'd basically have to ship Richard Dawkins off to Utah if you wanted to go further out of your way to aggravate your audience. It does give an engaging account of running a large tech comms project, and raises some questions about how we propose to understand a world where increasing amounts of our stuff gets done by increasingly many, increasingly complicated tissues of APIs. Well, sort of. That's what all the notes I made were about, anyway.

    *Slideshare ate my fonts. Just so we're clear on this: I'd never use badly-kerned Arial in a presentation. Don't worry.

    Read the article

  • Sucking Less Every Year?

    - by AdityaGameProgrammer
    Sucking Less Every Year - Jeff Atwood. I came across this insightful article. Quoting directly from the post: "I've often thought that sucking less every year is how humble programmers improve. You should be unhappy with code you wrote a year ago. If you aren't, that means either A) you haven't learned anything in a year, B) your code can't be improved, or C) you never revisit old code. All of these are the kiss of death for software developers." How often does this happen, or not happen, to you? How long before you see an actual improvement in your coding: a month, a year? Do you ever revisit your old code? How often does your old code plague you, or how often do you have to deal with your technical debt? It is definitely very painful to fix old bugs and dirty code that we may have written quickly to meet a deadline, and in some cases those quick fixes mean we have to rewrite most of the application. No argument about that. Some of the developers I have come across argued that they were already at the evolved stage where their coding doesn't need improvement and can't be improved any more. Does this happen? If so, how many years of coding in a particular language should one expect this to take? Related: Ever look back at some of your old code and grimace in pain? Star Wars Moment in Code: "Luke! I am your code!" "No! Impossible! It can't be!"

    Read the article

  • Learn about the Exciting New WebCenter Content 11.1.1.8 Features by Attending the Advisor Webcast on November 21st!

    - by AlanBoucher
    Have you been looking for a place to store your content securely and in an organized fashion, while still being able to access it on the go? Well, you can! Learn about the new Mobile App for WebCenter Content 11.1.1.8, along with other exciting new features, by attending the Advisor Webcast "WebCenter Content 11.1.1.8 Overview and Support Information" on November 21, 2013 at 11 am ET, 10 am CT, 9 am MT, 8 am PT, 5:00 pm Europe Time (Paris, GMT+01:00). This one-hour session is recommended for technical and functional users who have installed or will install WebCenter Content 11.1.1.8, or who would just like more information on the latest release.

    TOPICS WILL INCLUDE:
    - Overview of new features and enhancements
    - Installation of the new Content UI
    - Upgrading from older WebCenter Content versions
    - Support issues, including the latest patches
    - Roadmap of proposed additional features

    REGISTER NOW and mark your calendar:
    1. Event address for attendees: https://oracleaw.webex.com/oracleaw/onstage/g.php?d=590991341&t=a
    2. Register for the meeting. Once the host approves your request, you will receive a confirmation email with instructions for joining the meeting.

    Read the article

  • Malware - Technical analysis

    - by nullptr
    Note: please do not mod down or close. I'm not a stupid PC user asking to fix my PC problem; I am intrigued and am having a deep technical look at what's going on. I have come across a Windows XP machine that is sending unwanted P2P traffic. I have run a 'netstat -b' command, and explorer.exe is sending out the traffic. When I kill this process the traffic stops, and obviously Windows Explorer dies. Here is the header of the stream from the Wireshark dump (x.x.x.x is the machine's IP):

        GNUTELLA CONNECT/0.6
        Listen-IP: x.x.x.x:8059
        Remote-IP: 76.164.224.103
        User-Agent: LimeWire/5.3.6
        X-Requeries: false
        X-Ultrapeer: True
        X-Degree: 32
        X-Query-Routing: 0.1
        X-Ultrapeer-Query-Routing: 0.1
        X-Max-TTL: 3
        X-Dynamic-Querying: 0.1
        X-Locale-Pref: en
        GGEP: 0.5
        Bye-Packet: 0.1

        GNUTELLA/0.6 200 OK
        Pong-Caching: 0.1
        X-Ultrapeer-Needed: false
        Accept-Encoding: deflate
        X-Requeries: false
        X-Locale-Pref: en
        X-Guess: 0.1
        X-Max-TTL: 3
        Vendor-Message: 0.2
        X-Ultrapeer-Query-Routing: 0.1
        X-Query-Routing: 0.1
        Listen-IP: 76.164.224.103:15649
        X-Ext-Probes: 0.1
        Remote-IP: x.x.x.x
        GGEP: 0.5
        X-Dynamic-Querying: 0.1
        X-Degree: 32
        User-Agent: LimeWire/4.18.7
        X-Ultrapeer: True
        X-Try-Ultrapeers: 121.54.32.36:3279,173.19.233.80:3714,65.182.97.15:5807,115.147.231.81:9751,72.134.30.181:15810,71.59.97.180:24295,74.76.84.250:25497,96.234.62.221:32344,69.44.246.38:42254,98.199.75.23:51230

    So it seems the malware has hooked into explorer.exe and hidden itself quite well, as a Norton scan doesn't pick anything up. I have looked at Windows Firewall and it shouldn't be letting this traffic through. I have looked at the messages explorer.exe is sending in Spy++, and the only related ones I can see are socket connections etc. My question is: what can I do to look into this more deeply? What does the malware achieve by sending P2P traffic? I know the easiest fix is to reinstall Windows, but I want to get to the bottom of it first, just out of interest.

    Read the article

  • NFSv4 "Too many levels of symbolic links" error

    - by user1434058
    Both machines are running Ubuntu 12.04. On the remote NFSv4 client:

        $ ls /mnt/storage/aaaaaaa_aaa/bbbb/cccc_ccccc

    gives this error:

        ls: reading directory .: Too many levels of symbolic links

    How can I fix this? When the error occurs, ls does start listing the files, but PHP breaks.

    On the NFSv4 server, in /etc/fstab:

        /mnt/storage /srv/storage none bind 0 0

    In /etc/exports:

        /srv 192.168.1.0/24(rw,async,insecure,no_subtree_check,crossmnt,fsid=0,no_root_squash)
        /srv/storage 192.168.1.0/24(rw,async,nohide,insecure,no_subtree_check,no_root_squash)

    The error:

        root@ds:/mnt/storage/foreign_dbs/imdb/imdb_htmls# ls -l | head
        ls: reading directory .: Too many levels of symbolic links
        total 10302840
        -rw-r--r-- 1 root root 10484 Jul  5 13:56 0019038.gz
        -rw-r--r-- 1 root root 16264 Mar 30 00:31 0259701.gz
        -rw-r--r-- 1 root root 13784 Mar 30 14:20 1000000.gz
        -rw-r--r-- 1 root root 12741 Mar 30 13:04 1000003.gz
        -rw-r--r-- 1 root root 12794 Mar 30 12:40 1000004.gz
        -rw-r--r-- 1 root root 13123 Mar 30 12:07 1000005.gz
        -rw-r--r-- 1 root root 13183 Mar 30 12:04 1000006.gz
        -rw-r--r-- 1 root root 13443 Jul  4 01:16 1000007.gz
        -rw-r--r-- 1 root root 12968 Mar 30 11:05 1000008.gz

    I came across it in PHP: scandir would return 1612577.gz and 1612579.gz but skip 1612578.gz, even though the file types and properties are identical on all of them. This only happens on the NFS client; it works 100% on the server.

    Read the article

  • rsync not copying hard links

    - by A.Ellett
    I have two computers (both MacBook Airs) between which I sync one directory tree, but not the entire hard drive or any other directories. Let's say on computer A the directory is /Users/aellett/projects and on computer B it is /Users/bellett/projects. Generally I'll log into computer B and then connect remotely to computer A as user 'aellett'. As super user I sync the two project directories as follows:

        rsync -av /Volumes/aellett/projects/ /Users/bellett/projects/

    and this works as expected. On both computers I have another file, letter.txt, in a different directory which is not getting synced. Let's say on computer A the file is found in /Users/aellett/letters and on computer B in /Users/bellett/correspondence. Generally I don't want to share what's not included in /Users/<username>/projects, but I do want to share this particular file. So on both computers I made a correspondence directory in projects and then made hard links as follows.

    On computer A:

        ln /Users/aellett/letters/letter.txt /Users/aellett/projects/correspondence/letter.txt

    On computer B:

        ln /Users/bellett/correspondence/letter.txt /Users/bellett/projects/correspondence/letter.txt

    The next time I synced the two computers I did the following:

        rsync -av -H /Volumes/aellett/projects/ /Users/bellett/projects/

    When I checked on computer B, /Users/bellett/projects/correspondence/letter.txt was correctly synced, but the hard link to /Users/bellett/correspondence/letter.txt was no longer there. In other words, /Users/bellett/projects/correspondence/letter.txt was identical to /Users/aellett/projects/correspondence/letter.txt but differed from /Users/bellett/correspondence/letter.txt. Since these two files were hard-linked on both computers, I expected them to still be linked. Why are my hard links not being preserved?
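    As context for anyone hitting the same thing, a small local demo of the relevant rsync behaviour (hypothetical paths): -H can only preserve hard links whose partners are both inside the transferred tree, so a link partner outside projects/ is invisible to rsync and the destination file arrives as an ordinary, unlinked copy.

        # Both link partners inside the synced tree: the link survives
        mkdir -p tree/a copy
        echo hi > tree/a/f1
        ln tree/a/f1 tree/a/f2           # two names, one inode, both under tree/
        rsync -avH tree/ copy/
        ls -i copy/a/f1 copy/a/f2        # prints the same inode number twice

        # One partner outside the tree: rsync never sees outside.txt,
        # so copy/a/f3 is written as a fresh, unlinked file
        echo hi > outside.txt
        ln outside.txt tree/a/f3
        rsync -avH tree/ copy/
        ls -i outside.txt copy/a/f3      # two different inode numbers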

    Read the article

  • Rewriting html links with modproxyperlhtml

    - by Juancho
    I'm trying to setup an Apache reverse proxy using mod_proxy and modproxyperlhtml. This is my scenario: Domain for the proxy: http : // www.myserver.com/ Destination server (the one behind the proxy): http : // myserver.foo.com/myapp/ I'm sorry that I have to space the URL but serverfault doesn't allow me to post more than two links as "spam protection mechanism" (ridiculous on a site where you ask questions about servers and it's really probable to post more than two times the same URL's to explain your question). The idea is to map http : // www.myserver.com/ to http : // myserver.foo.com/myapp/ . Note that the path on the proxy is / and on the destination server is /myapp/. All of the examples I can find on the net (like the one on the official documentation of modproxyperlhtml) are the other way around, ie. path on the proxy /myapp/ and path on the destination server /. This is my current config that doesn't work: ProxyPass / http : // myserver.foo.com/myapp/ ProxyPassReverse / http : // myserver.foo.com/myapp/ PerlInputFilterHandler Apache2::ModProxyPerlHtml PerlOutputFilterHandler Apache2::ModProxyPerlHtml SetHandler perl-script PerlSetVar ProxyHTMLVerbose "On" LogLevel Info <Location / > # ProxyPassReverse /myapp/ PerlAddVar ProxyHTMLURLMap "/myapp/ /" PerlAddVar ProxyHTMLURLMap "http : // myserver.foo.com /" </Location> The examples use the ProxyPassReverse inside the Location directive, but on my case doesn't work, only when outside. With this configuration the links aren't being replaced as they should be, my guess is that the location isn't being found, thus the rewrite rules aren't being applied. The error log only shows that it uncompresses the content, searches it but doesn't find anything: [Tue Nov 13 0842:05 2012] [warn] [ModProxyPerlHtml] Uncompressing text/html; charset=UTF-8, Content-Encoding: gzip\n [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Content-type 'text/html; charset=UTF-8' match: /(text\\/javascript|text\\/html|text\\/css|text\\/xml|application\\/.*javascript|application\\/.*xml)/is\n [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Compressing output as Content-Encoding: gzip\n [Tue Nov 13 08:42:06 2012] [warn] [ModProxyPerlHtml] Content-type 'text/html; charset=UTF-8' match: /(text\\/javascript|text\\/html|text\\/css|text\\/xml|application\\/.*javascript|application\\/.*xml)/is\n What could be wrong ?

    Read the article
