Search Results

Search found 21908 results on 877 pages for 'content catalog'.


  • wget crawling search results of news website

    - by kiltek
    I am trying to crawl the search results of a news website using wget. The name of the website is www.voanews.com. After typing in my search keyword and clicking search, it proceeds to the results. Then I can specify a "to" and a "from" date and hit search again. After this the URL becomes: http://www.voanews.com/search/?st=article&k=mykeyword&df=10%2F01%2F2013&dt=09%2F20%2F2013&ob=dt#article and the actual content of the results is what I want to download. To achieve this I created the following wget command:

      wget --reject=js,txt,gif,jpeg,jpg \
           --accept=html \
           --user-agent=My-Browser \
           --recursive --level=2 \
           www.voanews.com/search/?st=article&k=germany&df=08%2F21%2F2013&dt=09%2F20%2F2013&ob=dt#article

    Unfortunately, the crawler doesn't download the search results. It only gets into the upper link bar, which contains the "Home, USA, Africa, Asia, ..." links, and saves the articles they link to. It seems like the crawler doesn't check the search result links at all. What am I doing wrong, and how can I modify the wget command so that it downloads only the links in the search results list (and, of course, the pages they link to)?
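
    (A minimal sketch, not an answer from the listing: one thing worth ruling out is shell quoting, since an unquoted & in the URL splits the command before wget ever sees the full query string. The sketch below rebuilds the same search URL and invokes wget through Python's subprocess so no shell parsing is involved; the keyword and dates are the ones from the question, everything else is an assumption.)

      import subprocess
      from urllib.parse import urlencode

      # Same parameters as the question's example search (assumed values).
      params = {
          "st": "article",
          "k": "germany",
          "df": "08/21/2013",
          "dt": "09/20/2013",
          "ob": "dt",
      }
      url = "http://www.voanews.com/search/?" + urlencode(params)

      # Passing an argument list means no shell is involved, so the '&'
      # characters in the query string cannot split or background the command.
      subprocess.run([
          "wget",
          "--reject=js,txt,gif,jpeg,jpg",
          "--accept=html",
          "--user-agent=My-Browser",
          "--recursive", "--level=2",
          url,
      ], check=True)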

    Read the article

  • SharePoint 2010 Enterprise wiki - [New page] missing

    - by icelava
    I am trying to ramp up knowledge on SharePoint deployment and usage (never did it before), because of a direction to use SharePoint 2010 as a repository platform (wiki format) for our customer's infrastructure documentation. In my test virtual server, a new site based on the Enterprise wiki template was set up. I went into Site Actions > Manage Site Features to activate Wiki Page Home Page. The default sub-web then went from /Pages to /SitePages and looks like the default Team template. The odd thing is that the Site Actions menu is missing the New Page option. My colleague does not understand why this is the case, as it ought to be there. The original /Pages sub-web does have the option. What conditions are in play that influence the appearance of that option? UPDATE: Another phenomenon observed: in the Site Actions > View All Site Content view, the wiki document libraries listed in the grid have their hyperlink (e.g. "Site Pages") lead straight to the default page. It does not show its own table listing of pages under that document library, unlike the original Pages document library, which does show up as a listing, as expected. I wonder if this hints at any problems.

    Read the article

  • Selectively allow unsafe html tags in Plone

    - by dhill
    I'm searching for a way to put widgets from several services (PicasaWeb, Yahoo Pipes, Delicious bookmarks, etc.) on the community site I host on Plone (currently 3.2.1). I'm looking for a way to allow a group of users to use dangerous HTML tags. There are some ways I can see, but I don't know how to implement them. One would be changing safe_html for the pages editors own (1). Another would be to allow those tags on some subtree (2). And yet another would be finding an equivalent of a "static text portlet" that displays in the middle panel (3). We could then use one of the composite products (I stumbled upon Collage and CMFContentPanels) to include the unsafe content on other pages. My site has been overrun by advert bots, so I don't want to remove the filtering altogether. I don't have an easy (no false positives) way of checking which users are bots, so deploying a captcha now wouldn't help either. The question is: how do I implement any of those solutions? (I already asked this on the Plone mailing list without an answer, so I thought I would give it another try here.)

    Read the article

  • Virtual host redirects to localhost in Ubuntu

    - by Salman
    I have recently configured a virtual host on my Ubuntu 11.10. But whichever site I type, it always redirects to the localhost page. This is my "our-test-site" file:

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          DocumentRoot /var/www/zftut/public
          <Directory />
              Options FollowSymLinks
              AllowOverride None
          </Directory>
          <Directory /var/www/zftut/public/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride None
              Order allow,deny
              allow from all
          </Directory>

    and this is my "/etc/hosts" file:

      127.0.0.1    localhost
      127.0.0.1    our-test-site.local
      127.0.0.1    zftut.local
      127.0.1.1    System.B System

    Now when I try to go to "zftut.local", it redirects me to the localhost page, showing me this:

      It works! This is the default web page for this server.
      The web server software is running but no content has been added, yet.

    What am I doing wrong? I referred to "this" tutorial for setting up the virtual host.

    Read the article

  • How do I replicate Gmail filtering (forwarding mostly)?

    - by projectdp
    I have reached the limits of Gmail forwarding. Before, there was no need to verify forwarding addresses. It's a problem for me now because the addresses I want to forward to are not natural inboxes but automated systems, with no way to track the verification email contents. I want to set this up, for example: mobile -> email -> facebook-email -> flickr-email -> tumblr-email -> posterous-email. How do I do this without Gmail filters? I think I need to use fetchmail to watch my inbox and then auto-forward to the above addresses. Is fetchmail the best solution to this issue? Any other MRAs? I'd like to do some more complicated things with the emails in an automated fashion too; how would I go about monitoring the inbox, doing some actions on the email before forwarding, and forwarding everywhere? Prerequisites:

      - a server: fetchmail daemon to poll the account
      - local mailbox
      - script to clean & forward appropriately (python probably)
      - sendmail + ~/.forward file
      - backup email account (Gmail probably)

    Any help would be greatly appreciated. I'm trying to automate my social content distribution.
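
    (A minimal sketch of the poll-and-forward piece, in Python rather than fetchmail, assuming the backup account is reachable over IMAP and a local MTA is listening on localhost; the host names, credentials and destination addresses below are placeholders, not anything from the question.)

      import imaplib
      import smtplib

      # Hypothetical credentials/hosts; substitute the real backup account and MTA.
      IMAP_HOST = "imap.example.com"
      USER, PASSWORD = "me@example.com", "secret"
      FORWARD_TO = ["post@flickr-email.example", "post@tumblr-email.example"]

      imap = imaplib.IMAP4_SSL(IMAP_HOST)
      imap.login(USER, PASSWORD)
      imap.select("INBOX")

      # Grab messages that have not been processed yet.
      _, data = imap.search(None, "UNSEEN")
      for num in data[0].split():
          _, msg_data = imap.fetch(num, "(RFC822)")
          raw = msg_data[0][1]
          # Any cleanup/rewriting of the message would happen here before resending.
          with smtplib.SMTP("localhost") as smtp:
              smtp.sendmail(USER, FORWARD_TO, raw)

      imap.logout()

    Run from cron, this covers roughly the same ground as the fetchmail + ~/.forward combination, with the "clean" step living where the comment marks it.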

    Read the article

  • Recover original email from Sendmail log

    - by Xavi Colomer
    I have a website whose contact form has been failing silently for two weeks (Wordpress + Contact Form 7). Apparently updating Contact Form 7 made mail to the assigned address [email protected] fail; I also tested with [email protected] and it failed too, until I tried with Gmail and it finally worked. Apparently the @telefonica.net and @me.com domains are not working with this version of the plugin, but I have to investigate the cause. I found the logs of the lost emails, but I would like to know if I can recover the sender or the content of the original messages.

      May 24 23:41:11 localhost sendmail[27653]: s4P3fBc3027653: from=www-data, size=3250, class=0, nrcpts=1, msgid=<[email protected]_web.com>, relay=www-data@localhost
      May 24 23:41:11 localhost sm-mta[27655]: s4P3fBdA027655: from=<[email protected]>, size=3359, class=0, nrcpts=1, msgid=<[email protected]_web.com>, proto=ESMTP, daemon=MTA-v4, relay=localhost.localdomain [127.0.0.1]
      May 24 23:41:11 localhost sendmail[27653]: s4P3fBc3027653: [email protected], ctladdr=www-data (33/33), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=33250, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s4P3fBdA027655 Message accepted for delivery)
      May 24 23:41:12 localhost sm-mta[27657]: s4P3fBdA027655: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=00:00:01, xdelay=00:00:01, mailer=esmtp, pri=123359, relay=tnetmx.telefonica.net. [86.109.99.69], dsn=5.0.0, stat=Service unavailable
      May 24 23:41:12 localhost sm-mta[27657]: s4P3fBdA027655: s4P3fCdA027657: DSN: Service unavailable
      May 24 23:41:12 localhost sm-mta[27657]: s4P3fCdA027657: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=30000, dsn=2.0.0, stat=Sent

    Thanks

    Read the article

  • Puppet - Is it possible to use a global var to pull in a template with the same name?

    - by Mike Purcell
    I'm new to Puppet. As such, I am trying to work out the best way to set up my manifests so they make sense. Following the DRY (don't repeat yourself) principle, I am trying to load common directives in one template, then load in environment-specific directives from a file matching the environment. Basically like this:

      # nodes.pp
      node base_dev {
          $service_env = 'dev'
      }
      node 'service1.ownij.lan' inherits base_dev {
          include global_env_specific
      }
      class global_env_specific {
          include shell::bash
      }

      # modules/shell/bash.pp
      class shell::bash inherits shell {
          notify{"Service env: ${service_env}": }
          file { '/etc/profile.d/custom_test.sh':
              content => template('_global/prefix.erb', 'shell/bash/global.erb', 'shell/bash/$service_env.erb'),
              mode    => 644
          }
      }

    But every time I run puppet agent --test, Puppet complains that it can't find the shell/bash/$service_env.erb file, even though I double-checked that it exists. I know the var is accessible, because the notify statement outputs the expected value, so I suspect I am doing something that is not allowed. I know I could have a single template.erb and pass variables to the template, which would work in this case because the custom.sh file is small and there are not many changes across environments, but for more complex configs (httpd, solr, etc.) I'd prefer to access environment-specific files. I am also aware that I can specify environment-specific module paths, but I'd prefer to just handle this behavior at the template level, instead of having several closely named directories. Thanks.

    Read the article

  • Green flickering pixels that move with black images

    - by user568458
    Strange question... Occasionally, on my LCD screen, pixels that should be black flicker rapidly and constantly between black and green, about 4 flickers a second. The crazy part is, unlike dead/stuck pixels, they are relative to content on the screen and move with it. For example, I might be looking at a web page with a picture that has lots of black. There might be a couple of green flashing pixels in that black that shouldn't be there. I scroll the page, and the green flickering pixels move with the image. It seems that every physical pixel is fine, but somehow something interprets part of the image in a way that causes flickering green... It's not just in a web browser. My first thought was to blame a trolling blogger cunningly uploading an animated gif that simulates a failing pixel... but it happens in a wide range of applications. It seems to occur randomly, other than that it seems to only occur in areas of pure black, and it's always pure 100% green. It happens rarely enough that it's not a big deal, but it's such a strange problem it bugs me. I can't find any info on anything like this. I'm not even sure if it's hardware or software. Any ideas? (Windows 7 laptop connected to the LCD by a DVI-to-HDMI cable)

    Read the article

  • How do you enable View Source in IE8 when it gets magically disabled

    - by Tim Meers
    I have multiple computers that all seem to have View Source disabled in the context menu when you right-click on a web page. Now I know it's not that the web page is somehow disabling it; I'm pretty sure that's not even possible. But alas, I have at least 3 machines in my office (not on AD) that have this problem. I have also worked on clients' computers that have this same issue. It's downright maddening! I tried to Google for it, but it just shows results from the dawn of IE6 in all of its "glory", with a bug where if the cache was full it would be disabled. But this is not the case in IE8. Anybody have a clue why this is happening, or a fix for it? Maybe a reg setting? Update: I got a little closer to solving it, but there is still an issue on one computer where View Source is allowed in HTTP but not in HTTPS. One other computer works correctly in both. I found these two keys missing in the registry:

      [-HKEY_CURRENT_USER\SOFTWARE\Microsoft\Internet Explorer\View Source Editor]
      [-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\View Source Editor]

    Read the article

  • IIS 7.5 truncating POST body containing JSON data with ASP.NET MVC 3

    - by Guneet Sahai
    I'm facing a problem which I hope is a configuration thing with IIS, but it is causing a lot of trouble right now. Basically I have a controller that accepts a JSON and does some processing. While it generally works fine, every now and then when the system has some load I get an error. After some painful debugging, we figured out that the incoming JSON gets truncated, which causes the deserializer to fail. To narrow down the problem, we wrote a simple controller that accepts a JSON and tries to deserialize it; in case it fails, it just logs it. This works fine, but when I hit it using a load-testing tool (JMeter) it throws the same error (truncation) for a few requests. The number of failures increases when I increase parallel connections; it starts showing at 150 concurrent requests. We are running IIS 7 on Windows 2008 Server with ASP.NET MVC 3, with a more or less default configuration of IIS. More information is available in my question below: http://stackoverflow.com/questions/12662282/content-length-of-http-request-body-size
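
    (Not from the listing: a small Python sketch of the same kind of load test, useful for checking whether the truncation reproduces outside JMeter. The URL and payload are placeholders for the echo-style test controller; it fires a fixed JSON body from 150 threads, mirroring the concurrency at which the failures start, and tallies the response codes.)

      import json
      import concurrent.futures
      import requests  # third-party: pip install requests

      # Hypothetical endpoint that deserializes the posted JSON and reports success/failure.
      URL = "http://test-server.example/api/echo"
      PAYLOAD = json.dumps({"data": "x" * 50000})  # large-ish body so truncation is visible

      def post_once(_):
          r = requests.post(URL, data=PAYLOAD,
                            headers={"Content-Type": "application/json"})
          # The server-side controller logs a failure when deserialization breaks;
          # here we only collect the status code for each request.
          return r.status_code

      with concurrent.futures.ThreadPoolExecutor(max_workers=150) as pool:
          codes = list(pool.map(post_once, range(3000)))

      print({c: codes.count(c) for c in set(codes)})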

    Read the article

  • Service nginx reload: unexpected error

    - by Anna
    I'm trying to install wordpress on my nginx server by following this tutorial: http://premium.wpmudev.org/blog/how-to-setup-your-own-nginx-powered-wordpress-server/ However, the last command at step 7, service nginx reload, gave me a strange error. A copy-paste from my terminal:

      root@server:~# service nginx reload
      Reloading nginx configuration: nginx: [emerg] unexpected "o" in /etc/nginx/sites-enabled/wordpress:7
      nginx: configuration file /etc/nginx/nginx.conf test failed

    When I nano into sites-enabled/wordpress, on the 7th line I can't find anything strange:

      <!DOCTYPE html>
      <html class=" ">
      <head prefix="og: http://ogp.me/ns# fb: http://ogp.me/ns/fb# object: http://ogp.me/ns/object# article: http://ogp.me/ns/article# profile: http://ogp.me/ns/profile#">
      <meta charset='utf-8'>
      <meta http-equiv="X-UA-Compatible" content="IE=edge">

    Also, I don't see any obvious errors in my nginx.conf file, but maybe I'm not checking something? The first couple of lines of the nginx config file:

      user www-data;
      worker_processes 4;
      pid /var/run/nginx.pid;

      events {
          worker_connections 768;
          # multi_accept on;
      }

    Any help is appreciated, thanks a lot in advance!

    Read the article

  • IIS crashes with unhandled exception in ASP.NET

    - by SnowCrash
    We had an issue recently with an unhandled exception in an ASP.NET C# application bringing down IIS and all the application pools it was hosting. IIS Manager was unable to restart or stop/start the service, and I was unable to start IIS again after killing w3wp.exe in Task Manager. A system reboot restored IIS to a running state; as a primarily Linux admin, I generally consider an unplanned system reboot to resolve a software error to be an act of high heresy. Is there a way to "harden" IIS so that a faulting application does not affect anything but the request that exposes the fault? Some details on the server and the application fault:

      IIS: 7.5
      .NET: 4.0
      Windows Server 2008 R2

    It faulted on a call to System.Net.Dns.Resolve() with a URL pointing to a non-existent domain as the argument. (I'm aware that this method is deprecated, but the point that a page code issue shouldn't bring down the server still stands.) The exception generated was SocketException. The faulting module, according to Event Viewer, was KERNELBASE.dll. The issue was resolved by wrapping the call in a try-catch, logging the exception and displaying some generic content on the page. I'm hoping that I missed something in the IIS config that would switch it to "production" mode or something.

    Read the article

  • How should I configure nginx caching headers for a "baked" static file blog? (Octopress)

    - by Doug Stephen
    I recently deployed an Octopress blog (which is a blogging platform built around Jekyll). It's a static-site blog generator, with no dynamic content or databases to muck about with, and it's being served up by nginx. My question is: what is the appropriate expires directive or Cache-Control header I should set to make sure that visitors get the most up-to-date version of the site when they visit, without having to manually refresh? Since the site is just .html files, it seems to get cached pretty aggressively. I've tried a million different combinations of expires modified + xxxx and even straight up expires off, but I can't seem to wrap my head around it. I'm very new to dealing with caching like this, specifically on static files that change frequently, and obviously if the site hasn't been changed then I'd like for it to be served up out of the cache. Update (still not solved, though): I found open_file_cache and tweaked that. Still no dice. It seems like what I might want to do is use nginx as a proxy cache and use Apache with ETags? Is there really no convenient way to make nginx play nicer with conditional requests from the client? TL;DR: I'm running a static-file blog and I'd like to set up nginx to only serve from the cache if the blog hasn't been updated recently, but I'm too stupid to figure it out myself because I'm relatively new to web servers.
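
    (Not part of the question: a small Python check, assuming the requests library and a placeholder post URL, that replays a browser-style conditional GET so you can see whether nginx answers 304 Not Modified or keeps handing out a cached 200. That tells you whether the expires/Cache-Control headers or the revalidation path is the thing to fix.)

      import requests  # third-party: pip install requests

      URL = "http://blog.example.com/index.html"  # hypothetical page on the blog

      first = requests.get(URL)
      etag = first.headers.get("ETag")
      last_mod = first.headers.get("Last-Modified")
      print("Cache-Control:", first.headers.get("Cache-Control"))
      print("Expires:", first.headers.get("Expires"))

      # Replay the request as a conditional GET, the way a browser revalidates.
      headers = {}
      if etag:
          headers["If-None-Match"] = etag
      if last_mod:
          headers["If-Modified-Since"] = last_mod
      second = requests.get(URL, headers=headers)

      # 304 means nginx is revalidating correctly; 200 plus a long max-age points
      # at the caching headers rather than the client being the problem.
      print("Conditional GET status:", second.status_code)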

    Read the article

  • Ubuntu networking issue: two specific machines cannot browse the web while connected to the network at the same time

    - by jensendarren
    I have set up a secure wireless network which works very well, except for two laptops running Ubuntu 10.10 that can't access the Internet via a browser at the same time. They can both ping sites, wget sites, and use Skype, but when using a browser the page never loads (in Firefox the status bar just sits there saying "Connecting" until it times out). Here is what we have tried so far (nothing has fixed this issue):

      - OpenDNS
      - Restart networking services
      - Using wired connection rather than wireless
      - Removing all other nodes from the network except the two machines that have this issue
      - Swapped out the router
      - Factory reset the router
      - Reformatted one of the machines and re-installed Ubuntu 10.10

    Other things that we have checked:

      - The two machines can connect simultaneously without any issues to other wireless networks in different locations (say in an Internet cafe or another office)
      - The two machines have unique IP addresses
      - The two machines have unique MAC addresses
      - The two machines can communicate on the network using Skype, wget, ping etc.
      - We are not using a proxy on either machine

    FYI: I have attached output from Wireshark. For the test we turned both machines on and pointed them both to the same website. The content loaded on one and not the other. Here is the output from Wireshark (speedyshare.com/files/26228631/machine_output_1 && speedyshare.com/files/26228649/machine2). As you can see, the first one worked, the second one didn't. I don't fully understand the output and would appreciate it if someone could shed some light on what might be causing this and how we can fix it! Many thanks! Darren

    Read the article

  • how to set up dual wireless routers (G and N) with single broadband connection

    - by user15973
    A local cafe has an 802.11g wireless router attached to either an 8 or 12 Mbps broadband connection. Although many customers still have -g devices, an increasing number are showing up with 802.11n devices (e.g. Mac laptops, iPads). The cafe owner is content with his router's area coverage, but he would like his customers to be able to take advantage of the higher -n download speeds. He could simply replace the -g router with a -n router, but reportedly -n routers slow down considerably when servicing both -g and -n connections. Another option would be to buy the -n router, run an Ethernet cable between the two routers, and (of course) hook up the broadband connection to one of the two routers. Still another option would be to attach an ordinary wired switch to the broadband connection and then attach both routers to the switch. Aside from the cost issue, what are the tradeoffs of those approaches? Which would you recommend, and why? Is there a better option than the ones I mentioned? Thank you in advance for your advice.

    Read the article

  • What performance degradation to expect with Nginx over raw Gunicorn+Gevent?

    - by bouke
    I'm trying to get a very high-performing webserver setup for handling long-polling, websockets etc. I have a VM running (Rackspace) with 1GB RAM / 4 cores. I've set up a very simple gunicorn 'hello world' application with (async) gevent workers. In front of gunicorn, I put Nginx with a simple proxy to gunicorn. Using ab, gunicorn spits out 7700 requests/sec, where Nginx only does 5000 requests/sec. Is such a performance degradation expected?

    Hello world:

      #!/usr/bin/env python
      def application(environ, start_response):
          start_response("200 OK", [("Content-type", "text/plain")])
          return [ "Hello World!" ]

    Gunicorn:

      gunicorn -w8 -k gevent --keep-alive 60 application:application

    Nginx (stripped):

      user www-data;
      worker_processes 4;
      pid /var/run/nginx.pid;
      events { worker_connections 768; }
      http {
          sendfile on;
          tcp_nopush on;
          tcp_nodelay on;
          keepalive_timeout 65;
          types_hash_max_size 2048;
          upstream app_server {
              server 127.0.0.1:8000 fail_timeout=0;
          }
          server {
              listen 8080 default;
              keepalive_timeout 5;
              root /home/app/app/static;
              location / {
                  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                  proxy_set_header Host $http_host;
                  proxy_redirect off;
                  proxy_pass http://app_server;
              }
          }
      }

    Benchmark (results: nginx TCP, nginx UNIX, gunicorn):

      ab -c 32 -n 12000 -k http://localhost:[8000|8080]/

    Running gunicorn over a unix socket gives somewhat higher throughput (5500 r/s), but it still doesn't match raw gunicorn's performance.

    Read the article

  • Google bots are severely affecting site performance

    - by Lynn
    I have an aggregate site on a Linux server that pulls in feeds from a universe of about 2,000 blogs. It's on Wordpress 3.4.2, and I have a cron job, staggered to run five times an hour on another server, that pulls in the stories and then publishes them to the front page of this site. This is so I didn't put too much pressure all on one server. However, the Google bots, which visit a few times every hour, bring the server to its knees in the mornings and evenings when there is an increase in traffic on the site. The bots have something like 30,000 links to follow at this point. How do I throttle the bots to simply grab the new stories off the front page and stop there? EDIT: Details of my server configuration: the server that handles all the publishing is an unmanaged instance on AWS. It mounts the NFS server and connects to the RDS to update content, etc. You get to this publishing instance via a plugin that detects the wp-admin link and then redirects you there. The front-end app server also mounts the NFS and requests data from the RDS. It is the only one that has WP Super Cache on it.... The OS is Ubuntu on the app server and the NFS runs CentOS. The front end is Nginx and the publishing server is Apache.

    Read the article

  • Ubuntu 10.04->10.10 in failed state - how to recover?

    - by Harvey
    I was running Ubuntu 10.04 and attempted to upgrade to 10.10. I have a really slow connection (DSL, 128 kbit/s) and copying the upgrade files took about 26 hours, so I of course let it run unattended. When I came back, I noticed the following 3 dialogs:

      (1) "Could not install the upgrades. The upgrade has aborted. Your system could be in an unusable state. A recovery will run now (dpkg --configure -a)."

      (2) "gpk-update-icon: Distribution upgrades available: maverick 10.10 (stable) [more information] [Do not show this again] [Cancel] [Ok]"

      (3) "gpk-update-icon: Security updates available. The following important updates are available for your computer: libwebkit-1.0-2-dbg - Web content engine library for Gtk+ - Debugging symbols; libcupsimage2 - Common UNIX Printing System(tm) - Raster image library ..."

    What is the best response to all of this? I went through something similar in an attempted network upgrade from 8.04 to 10.04 and had to reload the unbootable machine fresh from distribution media (all data was lost). I'd like to avoid that here. I have not yet responded to the dialogs, and want to make sure the system is still bootable and that I don't lose my data this time.

    Read the article

  • Nginx rewrite for link shortener + Wordpress pretty URLs

    - by detusueno
    Okay, so I installed Nginx/PHP/MySQL/Wordpress via an online walkthrough, and it had me enter these rewrites to enable Wordpress pretty URLs:

      if (-f $request_filename) {
          break;
      }
      if (-d $request_filename) {
          break;
      }
      rewrite ^(.+)$ /index.php?q=$1 last;
      error_page 404 = //index.php?q=$uri;

    This is then included in the vhost for my domain. What I'm trying to do now is add some redirection/link-shortener rewrites that will play nicely with the setup I have in mind. I'd like to redirect "x.com/y" to "x.com/script.php?id=y" for all external links that I post. The Wordpress link setup right now has almost all internal links begin with "news" (x.com/news/post-blah, x.com/news/category/1, etc.), BUT I also have a few root links that point to some internal content (x.com/news, x.com/start). I'm guessing that's going to cause some conflicts. What's the best approach to do this? I've never worked with Nginx (or any rewrite rules), but maybe I can distinguish between "x.com/news" and "x.com/news/" to allow it to play nicely? I had a friend set up a working version of this in Apache, and it'd be nice if I could get this up on Nginx again.

    Read the article

  • Mailman delivery troubles

    - by stanigator
    I have apparently posted this question in the wrong place (superuser.com), so I'll just repost it here. Hope those of you who read both sites are not going to be offended. It's about the mailing list management software Mailman from GNU. Here are the details:

      Hosting provider: Vlexofree
      Domain: www.sysil.com with Google Apps
      Mailing list created from hosting cpanel: [email protected]

    I have registered a list of subscribers and tried sending an email to [email protected]. I got the following error message:

      Delivery to the following recipient failed permanently: [email protected]
      Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550-5.1.1 The email account that you tried to reach does not exist. Please try 550-5.1.1 double-checking the recipient's email address for typos or 550-5.1.1 unnecessary spaces. Learn more at 550 5.1.1 http://mail.google.com/support/bin/answer.py?answer=6596 23si6479194ewy.44 (state 14).

      ----- Original message -----
      MIME-Version: 1.0
      Received: by 10.216.90.136 with SMTP id e8mr1469147wef.110.1264220118960; Fri, 22 Jan 2010 20:15:18 -0800 (PST)
      Date: Fri, 22 Jan 2010 20:15:18 -0800
      Message-ID: <[email protected]>
      Subject:
      From: Stanley Lee <[email protected]>
      To: [email protected]
      Content-Type: multipart/alternative; boundary=0016e6dab0931bccc3047dcd2f1e
      - Show quoted text -

    Is there any way of fixing this problem? I would like this mailing list to work through my hosting and domain. Thanks in advance.

    Read the article

  • Multiheaded X.org with a single workspace-pool

    - by blauwblaatje
    I've got an idea for x.org/$randomwindowmanager in combination with a multiheaded setup, but I haven't figured out how it should work. Also, I don't really know where to place the feature request. Now for the idea. I've been working with screen (wikipedia:GNU_Screen) for some years now. One thing I like about it is the fact that I can get a multi-display mode (screen -x), so you can have multiple terminals all connected to the same screen. The fun thing about it is that you can have 2 terminals with the same content and switch the on-screen layout without moving the terminals. I admit, in screen it's not extremely useful, but I think for a WM it can be. Imagine this: you've got two monitors and 4 workdesks. On one workdesk I've got my IDE with code, on the second one I've got the output, on the third one I've got the documentation, and on the fourth one I've got my e-mail and IM clients. At one moment I want my IDE and output on my monitors, another moment my code and documentation, and yet another moment my IM to consult a colleague, plus documentation or code. Finally, my colleague comes to help me at my desk. I'd like it if we could both watch the same workdesk without him sitting on my lap, so I turn one monitor so he can see it better. It would be great if we could see the same thing that's on my monitor (excluding the mouse pointer). The thing with most WMs is that your workspaces on the two monitors are either separated or glued together. If they're separated, you can change workspaces on each monitor autonomously, but you can't exchange applications between monitors because they're different X clients (iirc). If they're glued together (Xinerama), you can exchange the applications, but when changing your workspace, the other monitors change too. So, what I'd like to know is this: is this already possible, or should I submit a feature request somewhere (and if so, where)?

    Read the article

  • Accidentally dd'ed an image to wrong drive / overwrote partition table + NTFS partition start

    - by Kento Locatelli
    I screwed up and set the wrong output device for dd when trying to copy a FreeNAS ISO, overwriting the wrong external hard drive. Ironically, I was trying to set up a FreeNAS server for data backup...

      - The external drive is only used for data storage; the system is entirely intact
      - The drive had a single NTFS partition filling the entire device (2TB WD Elements)
      - The drive originally had an MBR partition table. It now shows as having a GPT, presumably from the FreeNAS image.
      - The drive was mounted at the time, with maybe a couple kB of data written/read after running dd
      - The drive is just a few months old and healthy (regular SMART / fs checks)
      - I have not rebooted the OS (CrunchBang)
      - /proc/partitions still holds the correct information (and has been stored)
      - I have dd's output (records in / out / bytes)
      - testdisk did not find any partitions on a quick or deep search
      - I am running photorec to recover the more important data (a couple of recent plaintext files that hadn't been backed up yet). The vast majority of the disk content (>80%) is unnecessary media files.

    My current plan is to let photorec do its thing, then recreate the MBR with gparted and use cfdisk to create another NTFS partition using the sector information from /sys/block/.../. Is that a good course of action (that is, does it have a chance of success)? Or is there anything else I should try first?

    Possibly relevant information:

      dd if=FreeNAS-8.0.4-RELEASE-p3-x86.iso of=/dev/sdc:
        194568+0 records in
        194568+0 records out
        99618816 bytes (100 MB) copied

      grep . /sys/block/sdc/sdc*/{start,size}:
        /sys/block/sdc/sdc1/start:2048
        /sys/block/sdc/sdc1/size:3907022848

      cat /proc/partitions:
        major minor  #blocks  name
        ** Snipped **
        8       32  1953512448  sdc
        8       33  1953511424  sdc1

      current fdisk -l output:
        WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
        Disk /dev/sdc: 2000.4 GB, 2000396746752 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/sdc doesn't contain a valid partition table
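
    (Added for illustration, not from the question: a quick arithmetic check of that plan using the numbers already captured above, with the 512-byte logical sector size from the fdisk output. If cfdisk is given the same start sector and length, the new NTFS partition entry lines up exactly with the old one, and the partition end lands right at the end of the 2000396746752-byte disk.)

      SECTOR = 512                      # logical sector size from fdisk -l

      start_sector = 2048               # /sys/block/sdc/sdc1/start
      size_sectors = 3907022848         # /sys/block/sdc/sdc1/size

      print("partition start, bytes :", start_sector * SECTOR)        # 1048576 (1 MiB)
      print("partition length, bytes:", size_sectors * SECTOR)        # 2000395698176
      print("end sector (exclusive) :", start_sector + size_sectors)  # 3907024896
      print("disk size / 512        :", 2000396746752 // SECTOR)      # 3907024896, matches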

    Read the article

  • Postfix character encoding?

    - by Camran
    I use Postfix as a mail server, on Ubuntu, and I use PHP to send emails. The problem is that none of my emails are encoded properly by the mail software my VPS provider uses. According to them, the problem lies with me. It is only the name field which isn't encoded properly. For example, "Björn" comes out garbled (as something like "BjÃ¶rn") in my emails. However, when I echo the $name, it outputs "Björn", which is correct. Also, Gmail and Hotmail do show it correctly. The strange part is that the text (the message itself) is encoded properly. I use the following for sending mail:

      $headers="MIME-Version: 1.0"."\n";
      $headers.="Content-type: text/plain; charset=UTF-8"."\n";
      $headers.="From: $name <$email>"."\n";
      $name= iconv(mb_detect_encoding($name), "UTF-8//IGNORE//TRANSLIT", $name);
      //// I HAVE TRIED WITH AND WITHOUT THE LINE ABOVE, NO DIFFERENCE
      mail($to, '=?UTF-8?B?'.base64_encode($subject).'?=', $text, $headers, '[email protected]');

    I have tried with and without the iconv line, no luck. The last thing I can think of is Postfix: could there be a setting for character encoding there? Does anybody know?
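
    (A side note added here, not from the listing: the subject line in that mail() call is already RFC 2047-encoded with =?UTF-8?B?...?=, while the display name in the From: header goes out as raw UTF-8, and headers are where strict relays tend to object. Below is a minimal Python sketch of the same rule, with a placeholder address, just to show what an encoded display name looks like; in PHP, running mb_encode_mimeheader() on $name before building the header should be the analogous move.)

      from email.header import Header
      from email.utils import formataddr

      name = "Björn"                 # non-ASCII display name
      addr = "bjorn@example.com"     # hypothetical address

      # Encode only the display name, the same RFC 2047 treatment the
      # subject already gets in the question's mail() call.
      print(formataddr((str(Header(name, "utf-8")), addr)))
      # e.g. =?utf-8?b?QmrDtnJu?= <bjorn@example.com>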

    Read the article

  • One server with multiple desktop "heads" with VNC

    - by Alexis K
    I am managing a system of kiosks. Each kiosk currently runs a web browser, with the application for the kiosk running in the browser. Each kiosk needs to be able to display separate content. At times, the application running in the web browser freezes, and I have to go out to the site to refresh the page. I want to see if there is a way to have one central server that hosts multiple browser heads; each kiosk would then run a program like VNC to display one of the heads. This way, when the program freezes, I just have to log in to the central server and refresh the page. Getting VNC or other remote desktop software installed on the clients is no problem. What I am looking for is a way to have VNC remote into a specific head of a web browser. Does such a thing exist? Or do I have to run a VM for each kiosk to remote into? Any advice, pointers, or solutions would be helpful.

    Read the article

  • How to pin "Visual Studio 2010 Documentation" shortcut to Windows 7 taskbar?

    - by Chris W. Rea
    I just installed Microsoft Visual Studio 2010 at home, on my Windows 7 PC. One of the items installed with VS2010 is "Microsoft Visual Studio 2010 Documentation". I like to have the documentation installed locally and at my fingertips, and so I had always added a shortcut for the help viewer to my Quick Launch toolbar before. However, I'm not able to pin the new documentation to the Windows 7 taskbar. It's frustrating. Note carefully: when I launch "Microsoft Visual Studio 2010 Documentation" from the Start menu, it seems to perform two functions. First, it launches the "Help Library Agent", which is a local HTTP server from which the help content is served, similar to the local ASP.NET web development server. Second, it launches the default web browser against the localhost URL corresponding to the port on which the "Help Library Agent" is running, for example: http://127.0.0.1:47873/help/1-1444/ms.help?method=f1&query=msdnstart&product=VS&productVersion=100&locale=en-US ... in other words, the program doesn't leave behind an active foreground process that displays in the taskbar. So, I can't choose "Pin this program to taskbar" as one might with a typical program. How can I get a shortcut to "Microsoft Visual Studio 2010 Documentation" in the Windows 7 taskbar? Has anybody got a workaround for this?

    Read the article
