Search Results

Search found 8013 results on 321 pages for 'clean urls'.


  • Cannot connect to telnet server

    - by BloodPhilia
    So, I can't use telnet to connect to any server, but it works fine from a different computer. It just says it can't connect. I tried the following things:

    - Disabled firewall and AV protection (basically, no security feature was left online).
    - Set telnet to "Trusted" in my AV protection (Kaspersky Internet Security 2011).
    - Used PuTTY to telnet, but apparently PuTTY's connection is also inhibited (says it can't connect to host).
    - Disabled the telnet client in Control Panel and then re-enabled it (Windows 7 Ultimate).
    - Verified the hosts file is clean.
    - Checked for nasties using MBAM and KIS 2011, as well as going through my HijackThis logs; nothing found.

    I can connect to the same machines/servers through the web browser, ping, tracert, etc. Only telnet seems to be blocked. Any other thoughts?
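    A quick way to isolate whether the problem is the telnet clients or the network path is a raw TCP connection test, which bypasses both telnet.exe and PuTTY. A minimal sketch in Python (the host name is a placeholder):

        import socket

        def can_connect(host, port=23, timeout=5):
            """Attempt a plain TCP connection; True means the port is reachable."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError as exc:
                print("connection to %s:%d failed: %s" % (host, port, exc))
                return False

        print(can_connect("example.com"))  # replace with the server being tested

    If this also fails, something below the application layer (driver, filter, LSP) is blocking port 23; if it succeeds, the telnet clients themselves are the suspects.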

    Read the article

  • Errors checking and opening a URL using PHP

    - by Jean
    Hello. Here is one script without any errors:

        $url = "http://yahoo.com";
        $file1 = fopen($url, "r");
        $content = file_get_contents($url);
        $t_beg = explode('<title>', $content);
        $t_end = explode('</title>', $t_beg[1]);
        echo $t_end[0];

    And here is the same script using a loop to check multiple URLs, which produces errors:

        for ($j = 1; $j <= $i; $j++) {
            if ($x[$j] != '') {
                $t_u = "http:" . $x[$j];
                $file2 = fopen($t_u, "r");
                $content2 = file_get_contents($t_u);
                $t_beg = explode('<title>', $content);
                $t_end = explode('</title>', $t_beg[1]);
                echo $t_end[0];
            }
        }

    The error is:

        Warning: fopen() [function.fopen]: php_network_getaddresses: getaddrinfo failed: No such host is known. in g:/

    What exactly is wrong here?
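    The getaddrinfo warning means the resolver never got a valid hostname, which usually points at a malformed URL coming out of the loop (for instance, "http:" . $x[$j] yields http:foo rather than http://foo when $x[$j] has no leading slashes). A hedged sketch of that guard in Python, validating each candidate URL before fetching it (the list values are stand-ins for $x[$j]):

        from urllib.parse import urlparse
        from urllib.request import urlopen

        candidates = ["//yahoo.com", "yahoo.com"]  # placeholders for $x[$j]

        for fragment in candidates:
            url = "http:" + fragment
            if not urlparse(url).netloc:  # no hostname: the resolver would fail
                print("skipping malformed URL:", url)
                continue
            with urlopen(url) as resp:
                print(resp.read(200))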

    Read the article

  • Finding proof of server being compromised by Black Hole Toolkit exploit

    - by cosmicsafari
    I recently took over maintenance of a company server (Just Host, cPanel, Linux server); there's a ton of websites on it which I know nothing about. It had come to my attention that a client had attempted to access one of the websites hosted on this server and was met with a warning from Windows Defender. It had blocked access because it said the website had been compromised by the Black Hole Toolkit, or something to that effect. Anyway, I went in, updated various plugins and deleted some old suspect websites. I have since run the website in question through a few online malware scanners and it comes up clean every time. However, I'm not convinced. Do any of you guys know extensive ways I can check that the server isn't still compromised? I have no way to install any malware scanners or antivirus programs on the server, as it is horribly locked down by Just Host.
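    Assuming scripts can at least be run from the hosting account, one rough heuristic (a sketch, not a substitute for a real scanner) is to walk the web root looking for the obfuscation patterns these exploit kits commonly inject, such as eval(base64_decode(...)):

        import os
        import re

        # Common obfuscation markers; crude heuristic, expect some false positives.
        SUSPICIOUS = re.compile(rb"eval\s*\(\s*(base64_decode|gzinflate|str_rot13)")

        def scan(root):
            for dirpath, _, files in os.walk(root):
                for name in files:
                    if not name.endswith((".php", ".js", ".html")):
                        continue
                    path = os.path.join(dirpath, name)
                    try:
                        with open(path, "rb") as handle:
                            if SUSPICIOUS.search(handle.read()):
                                print("suspicious:", path)
                    except OSError:
                        pass

        scan("/home/user/public_html")  # placeholder path for the web root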

    Read the article

  • What is the simplest way to generate a domain-specific URL from an application path?

    - by harsh
    I have application-specific URLs like the ones below:

        ~/Default.aspx
        ~/Manage/Page.aspx
        ~/Manage/Account/Default.aspx

    I really don't know what these kinds of paths are actually called. Now I need to convert them to complete, domain-specific URLs, with no ../ or ../../ style segments in them. I want URLs like:

        http://www.example.com/Default.aspx
        http://www.example.com/Manage/Page.aspx
        http://www.example.com/Manage/Account/Default.aspx

    Currently I am doing it the following way (assuming I have an HttpRequest object):

        Request.Url.Host + path.Substring(1);

    Is there a simpler way to achieve this?
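    For what it's worth, the underlying operation is just resolving an app-relative path against a base URL. A sketch of that idea in Python (the base URL is a placeholder, and this illustrates the concept rather than the ASP.NET API):

        from urllib.parse import urljoin

        BASE = "http://www.example.com/"  # placeholder for the request's host

        def to_absolute(app_path):
            """Turn an app-relative path like '~/Manage/Page.aspx' into an absolute URL."""
            return urljoin(BASE, app_path.lstrip("~/"))

        for p in ["~/Default.aspx", "~/Manage/Page.aspx", "~/Manage/Account/Default.aspx"]:
            print(to_absolute(p))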

    Read the article

  • cleaning up pdftotext font issues

    - by mankoff
    I'm using pdftotext to make an ASCII version of a PDF document (made with LaTeX), because collaborators prefer a simple document in MS Word. The plain-text version I see looks good, but upon closer inspection the f character seems to be frequently mis-converted depending on what characters follow. For example, fi and fl often seem to become one special character, which I will try to paste here: ﬁ and ﬂ. What is the best way to clean up the output of pdftotext? I am thinking sed might be the right tool, but am not sure how to detect these special characters.
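    Those characters are most likely the Unicode ligatures ﬁ (U+FB01) and ﬂ (U+FB02), and Unicode compatibility normalization expands them back into their letter pairs, so hand-detecting them in sed may be unnecessary. A small sketch in Python:

        import unicodedata

        text = "The ﬁrst ﬂoor"                       # sample pdftotext output with ligatures
        clean = unicodedata.normalize("NFKC", text)  # NFKC expands ligature code points
        print(clean)                                 # -> "The first floor"

    Run over the whole file, this also catches the rarer ﬀ, ﬃ and ﬄ ligatures that a hand-written character class would miss.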

    Read the article

  • What's the easiest way to allow Exchange 2003 remote (no MSO client) users to check their mailbox size?

    - by Myrddin Emrys
    We are migrating from Exchange 2003 with no quota settings to Exchange 2010 with limited mailbox sizes. We are trying to get users to clean their mailboxes prior to the move, both to reduce the transfer load and to comply with the new quotas on the 2010 system. But many users access their mail through webmail only, and I cannot see a way for them to check their mail store size in that manner. Has anyone else run into this problem? Is there a good way to easily let users check their own mailbox size? The only workaround I've come up with is a report that IT generates and mail-merges out to users daily with their current mailbox size, but that is cumbersome and time-consuming compared to a way for them to check it themselves.

    Read the article

  • Outlook 2007 attachments won't preview

    - by Slace
    When I get an email with an attachment in Outlook 2007 that I should be able to preview (i.e. an image, a Word doc, etc.), I get the following error:

        This file cannot be previewed because of an error in the following previewer:
        Microsoft Outlook image previewer
        To open this file in its own program, double-click it.

    (Obviously the previewer changes name depending on the file type.) The computer is a clean install (it's only 5 days old), and I remember I was at one point able to preview within Outlook. I'd really like this feature to be back working. I've tried the usual exit Outlook, restart Windows, etc., but to no avail.

    Read the article

  • Good book(s) for MMORPG design & implementation?

    - by mawg
    I am a long-time professional C/C++ programmer (mostly embedded systems) and a hobbyist Windows and PHP hacker. Can anyone recommend a book (or books) specifically aimed at designing and (hopefully) implementing an MMORPG? I don't need general how-to-design or how-to-code books. Maybe a really good generic games book, but I am not interested in first-person shooters; I want to know what it takes to implement an MMORPG. Good books, and maybe also good URLs. Thanks. Just searching eBay and Amazon threw up a whole slew of books; Amazon's customer reviews give me an idea of how good they are, and the overviews tell me what areas they cover.

    Read the article

  • Firefox, Thunderbird, Safari - hangs up on start

    - by takeshin
    I have a strange problem on Windows Vista (automatic updates on). A long time ago I installed Firefox, Thunderbird, Safari and Opera, and they all worked well. Then, after a Windows restart, Firefox, Thunderbird and Safari would no longer start. (Opera (USB) works fine.) When I start a browser, the program name is listed in the processes list, but it is not activated and doesn't show any window. I tried creating new profiles with -ProfileManager, restarting Windows, and reinstalling the browsers. I scanned the system for suspicious programs and it looks clean. There is 45GB of free HDD space. WTF?

    Read the article

  • need some jquery if-else statement help

    - by zeemy23
    Hello, the code below is broken, but I'm not sure how. I've definitely made some big assumptions here as a newbie. I'm basically trying to create an if/else where imBannerRotater runs on #cast if the variable matches and on #pram if it doesn't. How could I fix this to get that result? The # are URLs. Thanks! -zeem

        $(document).ready(function(){
            if (mmjsRegionName == 'CO') {
                $("#cast").imBannerRotater({
                    return_type: 'json',
                    data_map: { image_name: 'name', url_name: 'url' },
                    image_url: '#',
                    base_path: '#',
                });
            } else {
                $("#pram").imBannerRotater({
                    return_type: 'json',
                    data_map: { image_name: 'name', url_name: 'url' },
                    image_url: '#',
                    base_path: '#',
                });
        });

    Read the article

  • url to http request object

    - by takeshin
    I need to convert a string like this:

        $url = 'module/controller/action/param1/param1value/paramX/paramXvalue';

    to a URL that respects the current router (including translation and so on). Usually I generate the target URLs using the url view helper, but for this I would need to specify all the params, so I would have to explode the string manually. I tried using a request object, like this:

        $request = new Zend_Controller_Request_Http();
        // some code here passing the $url
        Zend_Debug::dump($request->getControllerName()); // null instead of 'controllers'
        Zend_Debug::dump($request->getParams());         // null instead of array

    but this seems suspect. Do I need to dispatch this request? How do I handle this case well?

    Read the article

  • Restore SBS 2003 CompanyWeb calendar from flat files?

    - by BrandonS
    OK, so I have the job of recovering a calendar that was used for an event schedule. The situation is that there was never a backup done, except for using Carbonite on the C drive. I have re-installed the server with the same server name and domain. I tried stopping the MSSQL SharePoint service and overwriting the two DB files (STS_1.mdf and STS_1_log.LDF), both together and each one individually, each time stopping and restarting the service. Now, I knew before I started that this had a slim chance of working, but I had to try, because everything else I could find involved backing up before restoring, and that just wasn't done. Please, someone help; it is driving me mad, I tell you, MADDDDDD. PS: I am not the genius that set this up, just the fool tasked to clean up the mess. :-)

    Read the article

  • using .htaccess to redirect from friendly url to actual file

    - by Kohalza
    I have the following RewriteRule in my .htaccess to redirect from a friendly URL to my main application file:

        RewriteRule ^\/(.*).html$ home/www/page.php?p=$1 [L]

    This should send any URL that points to an HTML page to page.php, with the URL as a parameter that will be parsed by the app. This works for URLs that look like:

        http://www.example.com/hello.html

    The problem is that I get a 404 error when the URL contains a directory path, for example:

        http://www.example.com/category/hello.html

    The error reads:

        File does not exist: /home/www/category

    It seems it is first looking for the 'category' path instead of processing the .htaccess. Any ideas how to solve this?

    Read the article

  • IIS: Content-Length 0 for CSS, Javascript and Images

    - by Adrian Grigore
    After a clean re-installation of my Windows 7 system, I can't get IIS 7 to properly deliver any static content. Dynamic content (ASPX pages and content served by ASP.NET MVC controllers) works fine, but static files such as CSS, JavaScript and images give me a 200 OK status code and a Content-Length of 0. The problem occurs with all web sites on my server, even a brand-new ASP.NET MVC template project with no changes, and with both Firefox and IE 8. What could possibly be the problem? Note that I also installed the latest Windows Azure SDK; perhaps that messed up some settings. But I don't know how to proceed with troubleshooting this.
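    To take the browsers out of the equation and see exactly what IIS returns for a static file, a quick header-dump sketch in Python (the URL is a placeholder for any static file on the site):

        from urllib.request import urlopen

        with urlopen("http://localhost/Content/Site.css") as resp:  # placeholder URL
            print(resp.status, resp.reason)
            for name, value in resp.getheaders():
                print("%s: %s" % (name, value))
            print("body bytes actually received:", len(resp.read()))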

    Read the article

  • Compressing a hex string in Ruby/Rails

    - by PreciousBodilyFluids
    I'm using MongoDB as a backend for a Rails app I'm building. Mongo, by default, generates 24-character hexadecimal ids for its records to make sharding easier, so my URLs wind up looking like:

        example.com/companies/4b3fc1400de0690bf2000001/employees/4b3ea6e30de0691552000001

    which is not very pretty. I'd like to stick to the Rails URL conventions, but also leave these ids as they are in the database. I think a happy compromise would be to compress these hex ids into shorter strings over a larger character set, so they'd look something like:

        example.com/companies/3ewqkvr5nj/employees/9srbsjlb2r

    Then in my controller I'd reverse the compression, get the original hex id, and use that to look up the record. My question is: what's the best way to convert these ids back and forth? I'd of course want them to be as short as possible, but also URL-safe and simple to convert. Thanks!
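    The usual trick is to treat the 24 hex digits as one big integer and re-encode it in a larger alphabet; in Ruby, base 36 comes for free via to_i(16) and to_s(36) on big integers. A sketch of the full round trip in base 62, written here in Python for illustration:

        import string

        ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase  # base 62

        def encode(hex_id):
            """24-char hex id -> short base-62 token."""
            n = int(hex_id, 16)
            out = []
            while n:
                n, r = divmod(n, 62)
                out.append(ALPHABET[r])
            return "".join(reversed(out)) or "0"

        def decode(token):
            """Base-62 token -> 24-char hex id (zero-padded)."""
            n = 0
            for ch in token:
                n = n * 62 + ALPHABET.index(ch)
            return format(n, "024x")

        short = encode("4b3fc1400de0690bf2000001")
        print(short, decode(short) == "4b3fc1400de0690bf2000001")  # round-trips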

    Read the article

  • Two users using the same user profile while not in a domain.

    - by Scott Chamberlain
    I have a Windows Server 2003 machine acting as a terminal server; this computer is not a member of any domain. We demo our product on the server by creating a user account. The person logs in, uses the demo for a few weeks, and when they are done we delete the user account. However, every time we do this it creates a new folder in C:\Documents and Settings\. I know that with domains you can have many users point at one profile and make it read-only, so all changes are dumped afterwards, but is there a way to do that when the machine is not on a domain? I would really like it if I didn't have to remote in and clean up the folders every time.

    Read the article

  • Isopropyl okay to use on MacBook screen?

    - by Archagon
    I've always used a mixture of isopropyl alcohol and distilled water to clean my computer screens (50% distilled water and 50% of a 70% isopropyl solution). From what I understand, these are exactly the same ingredients used in most commercial screen cleaners, perhaps even more diluted. I recently used this solution to wipe off my 2010 MacBook Pro screen, and there don't seem to be any problems, but this support page explicitly says not to use isopropyl. Now I'm worried that I might have inadvertently damaged something. I'm also concerned because I once managed to dissolve the surface rubber lining of one of my mice with the isopropyl solution, and the MacBook Pro display has a thin rubber bezel keeping the glass in place. Why would Apple single out isopropyl on their support page? Should I be concerned?

    Read the article

  • Does urllib2.urlopen() actually fetch the page?

    - by beagleguy
    Hi all, I was wondering: when I use urllib2.urlopen(), does it just read the headers, or does it actually bring back the entire web page? That is, does the HTML actually get fetched on the urlopen call, or on the read() call?

        handle = urllib2.urlopen(url)
        html = handle.read()

    The reason I ask is this workflow:

    - I have a list of URLs (some of them from short-URL services).
    - I only want to read a web page if I haven't seen that URL before.
    - I need to call urlopen() and use geturl() to get the final page the link goes to (after the 302 redirects), so I know whether I've crawled it yet or not.

    I don't want to incur the overhead of grabbing the HTML if I've already parsed that page. Thanks!
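    For what it's worth, urlopen() completes the request and reads the response headers (following any redirects along the way), so geturl() is available without touching the body; the body is only pulled into memory when read() is called, though the server may already have sent bytes over the wire. A sketch of the workflow under that assumption:

        import urllib2  # matching the question; on Python 3 this is urllib.request

        seen = set()

        def crawl(urls):
            for url in urls:
                handle = urllib2.urlopen(url)  # request sent, headers read, 302s followed
                final = handle.geturl()        # final URL after redirects; no read() yet
                if final in seen:
                    handle.close()             # skip the body entirely for duplicates
                    continue
                seen.add(final)
                html = handle.read()           # the body is only loaded here
                # ... parse html ...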

    Read the article

  • Windows 7 Long Delay on Login or Unlock

    - by Adam Driscoll
    I have a clean install of Windows 7 x32 running on my HP DV6449us and am experiencing a really long delay (10+ seconds) when unlocking or logging into my computer. The same issue was happening with UAC, but I was able to turn that off; I realize this is a security risk, but I couldn't take it any more. I've read about this being video-driver related, and have updated the drivers to the newest I could find for the GeForce Go 6150. Is anyone else experiencing this? My desktop is perfectly happy, but it's sporting an Nvidia 260 GT. Is it just the lack of firepower?

    Read the article

  • Best way to store data for Greasemonkey based crawler?

    - by Björn
    I want to crawl a site with Greasemonkey and wonder if there is a better way to temporarily store values than with GM_setValue. What I want to do is crawl my contacts in a social network and extract the Twitter URLs from their profile pages. My current plan is to open each profile in its own tab, so that it looks more like a normal browsing person (i.e. CSS, scripts and images will be loaded by the browser), then store the Twitter URL with GM_setValue. Once all profile pages have been crawled, create a page using the stored values. I am not so happy with the storage option, though. Maybe there is a better way? I have considered inserting the user profiles into the current page so that I could process them all with the same script instance, but I am not sure if XMLHttpRequest looks indistinguishable from normal user-initiated requests.

    Read the article

  • Choosing a Portal / CMS software for developing multi brand websites?

    - by hbagchi
    We are in the early stages of overhauling a multi-brand website, built using a custom-developed Java MVC framework, to enable Web 2.0 features. The built-in features we are looking at are: i18n, SSO, content search and indexing, personalization, mashup support, Ajax support, rich-media content storage and management, search-engine-optimization friendliness, bookmarkable URLs, support for social networking sites, and support for page composition and decoration using templates. A combination of these features is supported by many portal and CMS products. Any insights on using a portal/CMS combination to address these requirements would be very helpful! This is a follow-up to this post, focusing on the portal/CMS angle.

    Read the article

  • What are the main differences between: Seaside vs Aida vs Iliad

    - by elviejo
    What are the differences between the three Smalltalk web application frameworks? Some starting points:

    - What is the sweet spot for each framework? In which cases would you use one or the other?
    - What are their weaknesses?
    - Which one has the cleanest URLs?
    - How do they handle Ajax?
    - Do they have some preference in their use of persistence?

    I'm just trying to decide which framework is appropriate for each kind of application.

    Read the article

  • md5_file() sometimes doesn't return the md5?

    - by Rob
    <?php
    include_once('booter/login/includes/db.php');
    $query = "SELECT * FROM shells";
    $result = mysql_query($query);
    while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
        $hash = @md5_file($row['url']);
        echo $hash . "<br>";
    }
    ?>

    The above is my code. Usually it works flawlessly on most URLs, but every now and then it will just skip the md5 on a line, as if it doesn't retrieve it, even though the file is there. I can't figure out why. Any ideas?
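    Worth noting: the @ in @md5_file() suppresses exactly the warning (DNS failure, timeout, 404) that would explain the skipped lines, so surfacing errors is the first step. A sketch of the same loop with failures reported, written in Python for illustration (the URL is a placeholder):

        import hashlib
        from urllib.request import urlopen
        from urllib.error import URLError

        def md5_of_url(url, timeout=10):
            """Stream a remote file and return its md5 hex digest, or None on failure."""
            digest = hashlib.md5()
            try:
                with urlopen(url, timeout=timeout) as resp:
                    for chunk in iter(lambda: resp.read(8192), b""):
                        digest.update(chunk)
            except (URLError, OSError) as exc:
                print("failed for %s: %s" % (url, exc))  # the error @ would have hidden
                return None
            return digest.hexdigest()

        print(md5_of_url("http://example.com/"))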

    Read the article

  • Missing 16:10 resolutions with Nvidia drivers (Can't add resolutions)

    - by Wuinny
    Hello, I have a laptop with an Nvidia 9650M GT, and I used the drivers that Windows 7 came with. They work fine, but Metro 2033 tells me that I have to upgrade my drivers to play the game, so I did. But since I did a clean install of the new Nvidia drivers, I only have 1440*900 or 4:3 resolutions. I usually played at 1280*800 or 1184*740 (for performance reasons). With the "old" drivers I was able to create a custom resolution (1184*740) in the Nvidia control panel, but now when I try, it tells me that "my monitor cannot support this resolution". When I insist, it works, but as soon as I shut down my computer I have to recreate it. Does anyone have a fix? Thank you.

    Read the article

  • Facebook canvas app showing wordpress blog in iframe - how to link to specific pages?

    - by James Olney
    Sorry for the cumbersome title. I have a Facebook canvas app that calls up a WordPress site in an iframe. This is fine and works as hoped, but is there a way to link directly to a page within it? For example, the page I want to reach is actually:

        www.blog.com/monkeys/

    But I want to be able to use a link like this:

        www.facebook.com/monkeyblogapp/app_243495067635997/monkeys/

    so that I can give out that URL and get people to see the right bit of content, but within the Facebook environment. I have seen people mention callback URLs (like this one, "Direct links to pages in Facebook iFrame application?", which as far as I can tell just means having the app addresses match the website), but that doesn't work. Any suggestions? Many thanks, James

    Read the article
