Search Results

  • IIS7 Default Document not working

    - by TooFat
    I have a website running on IIS 7 whose default document, at the web site level, is set to index.php only. If I right-click the web site in IIS Manager and select Explore, I can see that the index.php file is there. But if I just browse to the site as http://my.site.com, I get the default IIS 7 logo page with "Welcome" in a bunch of different languages. If I go to http://my.site.com/index.php, the site comes up just fine. I have stopped and started the web site and run iisreset, but still no luck. The defaultDocument section of web.config looks like this:

        <defaultDocument>
          <files>
            <clear />
            <add value="index.php" />
          </files>
        </defaultDocument>

    What am I missing?
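
    One way to rule out an override elsewhere in the configuration hierarchy is to ask IIS for the effective default-document list at the site level. A hedged sketch using the stock AppCmd tool (substitute the name your site has in IIS Manager for "my.site.com"):

        rem show the effective defaultDocument section for this site
        %windir%\system32\inetsrv\appcmd list config "my.site.com" /section:defaultDocument

        rem if index.php is missing from the effective list, add it
        %windir%\system32\inetsrv\appcmd set config "my.site.com" /section:defaultDocument /+files.[value='index.php']

    Seeing the multilingual welcome page usually means iisstart.htm from the default wwwroot is being served, so it is also worth confirming which site actually answers for that host name.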

  • iStack for iPad

    - by Jonathan
    The image below is the current, and hopefully final, look of the front screen. The app will remember which Stack site you have chosen, so the user will only see this screen the first time, but they can always go back and change it. The red bar is only there when a new site is added (the StackOverflow one is just a test, as no site has been added since I implemented this) and can be dismissed by tapping the X on the right side (which isn't in the screenshot). Each column now has an edit button where the sites can be rearranged and moved between favourites and non-favourites, and this is persistent between app launches; moving a site in one column moves it in all of them. I've removed site icons in order to add them back properly, so they load lazily and there's no UI freezing. Printing functionality is all implemented and working, thanks to systempuntoout's stackprinter.com; it works with AirPrint, which means this app will require iOS 4.2 minimum.

    Current features:
    - 3 columns
    - Email link to question, open in Safari, and copy link actions
    - History for both questions and sites visited
    - In-app notification (red bar at the top) when a new site makes it into beta
    - StackPrinter in the app, without needing Safari, and AirPrint functionality
    - Facebook integration

    Planned:
    - Local favourites
    - Watching (a list of questions you're watching, like short-term favourites, eventually with push notifications)
    - Web service to access local favourites and watches on non-iPad devices
    - Twitter integration
    - Safari bookmarklet to open a question in iStack from Safari
    - In-app notification for when a site progresses from beta to normal
    - In-app notification history

  • Differences in memory consumption between two identical D7 sites?

    - by aendrew
    I'm running Drupal on a news site that has a lot of different View blocks on the front page (~5 total, all cached). In trying to reduce the memory footprint of the site, I've checked out source from SVN to a local development install to try to convert some of those blocks into more optimized code. Here's the weird thing: the Devel module lists memory consumption at 50 MB on the production site (running Nginx, PHP 5.2.17, XCache and Zend Optimizer) but only 14 MB on my development site (running Apache 2, PHP 5.2.13 and XCache). These are nearly identical versions of the same site; frankly, the production site should use even less memory, as I've disabled some of the modules running on the dev site. Any idea why this might be the case?
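
    Since the two environments differ in web server, PHP build, and loaded Zend extensions, a first pass (a sketch, not a diagnosis) is to diff what each PHP actually loads; note the CLI can read a different php.ini than the web SAPI, so check the reported config path:

        # run on each machine and compare the results
        php -i | grep -E 'Loaded Configuration File|memory_limit'
        php -m | sort > /tmp/php-modules.txt

        # copy both module lists to one machine under different names, then:
        diff php-modules-prod.txt php-modules-dev.txt

    Zend Optimizer, present only on the production box, is one obvious suspect to toggle while measuring.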

  • Is it actually possible to programmatically manage the state of an FTP server in IIS7?

    - by nbolton
    I'm able to manage FTP sites via IIS Manager; however, all attempts so far to manage the state of FTP sites by other means have failed, including:

    - Using the IIS7 API (Microsoft.Web.Administration)
    - Using WMI (with IIS6 compatibility enabled)
    - Using the AppCmd tool in System32\inetsrv

    Related questions:

    - Why am I unable to get Site.State for an FTP site when using Microsoft.Web.Administration?
    - Are there any workarounds I haven't tried?

    My objective is to manage (start/stop/query the state of) the FTP sites from C# code, as you can see from the three attempted workarounds above. When querying the FTP server state using WMI, it returns code 4, which means "Stopped", even though the site is definitely shown as running in IIS Manager. AppCmd is useless here, as it returns "Unknown" for FTP sites:

        c:\Windows\System32\inetsrv>appcmd list site
        SITE "Default Web Site" (id:1,bindings:http/*:80:,state:Stopped)
        SITE "Default FTP Site" (id:2,bindings:ftp/*:21:,state:Unknown)

  • I don't receive mail notifications... Nagios Core 4

    - by alessio
    I have a problem with automatic mail notifications in Nagios Core 4, installed on an Ubuntu 12.04 LTS server. I have tried to send mail as both the nagios user and the root user with the command:

        echo "test" | mail -s "test mail" [email protected]

    and I received the mail correctly, but I don't receive any automatic mail notifications. I don't know what to do to resolve this issue! :( These are my configuration files (commands.cfg, contacts.cfg, nagios.log, mail.log).

    commands.cfg (the path /usr/bin/mail is the right path):

        # 'notify-host-by-email' command definition
        define command{
            command_name    notify-host-by-email
            command_line    /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" $CONTACTEMAIL$
        }

        # 'notify-service-by-email' command definition
        define command{
            command_name    notify-service-by-email
            command_line    /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
        }

        # 'process-host-perfdata' command definition
        define command{
            command_name    process-host-perfdata
            command_line    /usr/bin/printf "%b" "$LASTHOSTCHECK$\t$HOSTNAME$\t$HOSTSTATE$\t$HOSTATTEMPT$\t$HOSTSTATETYPE$\t$HOSTEXECUTIONTIME$\t$HOSTOUTPUT$\t$HOSTPERFDATA$\n" >> /usr/local/nagios/var/host-perfdata.out
        }

        # 'process-service-perfdata' command definition
        define command{
            command_name    process-service-perfdata
            command_line    /usr/bin/printf "%b" "$LASTSERVICECHECK$\t$HOSTNAME$\t$SERVICEDESC$\t$SERVICESTATE$\t$SERVICEATTEMPT$\t$SERVICESTATETYPE$\t$SERVICEEXECUTIONTIME$\t$SERVICELATENCY$\t$SERVICEOUTPUT$\t$SERVICEPERFDATA$\n" >> /usr/local/nagios/var/service-perfdata.out
        }

    contacts.cfg:

        define contact{
            contact_name                    supporto
            alias                           Supporto Clienti DEA
            service_notification_period     24x7
            host_notification_period        24x7
            service_notification_options    w,u,c,r
            host_notification_options       d,r
            service_notification_commands   notify-service-by-email
            host_notification_commands      notify-host-by-email
            email                           [email protected]
        }

        define contactgroup{
            contactgroup_name   admins
            alias               Nagios Administrators
            members             supporto
        }

    nagios.log:

        [1401871412] SERVICE ALERT: fileserver;Current Users;OK;SOFT;2;USERS OK - 1 users currently logged in
        [1401871953] SERVICE ALERT: backups;Nagios Status;WARNING;SOFT;1;NAGIOS WARNING: 36 processes, status log updated 541 seconds ago
        [1401872133] SERVICE ALERT: backups;Nagios Status;OK;SOFT;2;NAGIOS OK: 36 processes, status log updated 180 seconds ago
        [1401872321] SERVICE ALERT: posta;Swap Usage;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds
        [1401872322] SERVICE ALERT: fileserver;Current Users;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds
        [1401872420] SERVICE ALERT: archivio;Disk Space;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds
        [1401872492] SERVICE ALERT: fileserver;Current Users;OK;SOFT;2;USERS OK - 1 users currently logged in
        [1401872492] SERVICE ALERT: posta;Swap Usage;OK;SOFT;2;SWAP OK: 100% free (1984 MB out of 1984 MB)
        [1401872590] SERVICE ALERT: archivio;Disk Space;OK;SOFT;2;DISK OK
        [1401872931] Auto-save of retention data completed successfully.
        [1401873333] SERVICE ALERT: backups;Nagios Status;WARNING;SOFT;1;NAGIOS WARNING: 36 processes, status log updated 402 seconds ago
        [1401873513] SERVICE ALERT: backups;Nagios Status;OK;SOFT;2;NAGIOS OK: 36 processes, status log updated 180 seconds ago

    mail.log (I think the problem is here, but I don't know how to resolve it):

        Jun 4 10:00:01 backups sm-msp-queue[6109]: My unqualified host name (backups) unknown; sleeping for retry
        Jun 4 10:01:01 backups sm-msp-queue[6109]: unable to qualify my own domain name (backups) -- using short name
        Jun 4 10:20:01 backups sm-msp-queue[7247]: My unqualified host name (backups) unknown; sleeping for retry
        Jun 4 10:21:01 backups sm-msp-queue[7247]: unable to qualify my own domain name (backups) -- using short name
        Jun 4 10:40:01 backups sm-msp-queue[8327]: My unqualified host name (backups) unknown; sleeping for retry
        Jun 4 10:41:01 backups sm-msp-queue[8327]: unable to qualify my own domain name (backups) -- using short name
        Jun 4 11:00:01 backups sm-msp-queue[9549]: My unqualified host name (backups) unknown; sleeping for retry
        Jun 4 11:01:01 backups sm-msp-queue[9549]: unable to qualify my own domain name (backups) -- using short name
        Jun 4 11:20:01 backups sm-msp-queue[10678]: My unqualified host name (backups) unknown; sleeping for retry
        Jun 4 11:21:01 backups sm-msp-queue[10678]: unable to qualify my own domain name (backups) -- using short name

    I'm at the last step and I want to finish this Nagios Core! :) Any help would be appreciated! :)

    Host definition (this host has its disk almost full and is in a hard state, but no notification goes out):

        define host{
            use                 generic-host    ; Name of host template to use
            host_name           posta
            alias               Server Posta ESA
            address             10.10.2.102
            parents             xen1, xen2
            icon_image          redhat.png
            statusmap_image     redhat.gd2
        }

    Service definition:

        define service{
            use                     generic-service
            host_name               xen1, maestro, xen2, posta, nas002, serv2, esasrvmi02, esaubuntumi
            service_description     Disk Space
            check_command           ssh_all_disks!10%!5%
        }

    "Notification is allowed for the contact definition you gave, but is it also allowed at the service level?" Sorry, but I don't understand this thing! :(
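
    For what it's worth, the mail.log lines are sendmail complaining that the host "backups" has no fully qualified domain name, which can leave notification mail sitting in the submission queue even when interactive tests go out. A minimal sketch of the usual fix, with backups.localdomain as a placeholder FQDN (substitute your real domain):

        # give the machine a qualified name sendmail can resolve
        echo "127.0.1.1 backups.localdomain backups" | sudo tee -a /etc/hosts

        # verify the FQDN and check whether queued notifications drain
        hostname --fqdn
        sudo mailq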

  • Installing Mercurial on Mac OS X 10.6 Snow Leopard

    - by Matthew Rankin
    I installed Mercurial 1.3.1 on Mac OS X 10.6 Snow Leopard from source using the following:

        cd ~/src
        curl -O http://mercurial.selenic.com/release/mercurial-1.3.1.tar.gz
        tar xzvf mercurial-1.3.1.tar.gz
        cd mercurial-1.3.1
        make ALL
        sudo make install

    This installs the site-packages files for Mercurial in /usr/local/lib/python2.6/site-packages/. I know that installing Mercurial from the Mac disk image will install the files into /Library/Python/2.6/site-packages/, which is the site-packages directory for the default Mac OS X Python install. I have Python 2.6.2+ installed as a framework, with its site-packages directory in:

        /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages

    With Mercurial installed this way, I have to set:

        PYTHONPATH=/usr/local/lib/python2.6/site-packages:"${PYTHONPATH}"

    in order to get Mercurial to work.

    Questions:
    - How can I install Mercurial from source with the site-packages in a different directory?
    - Is there an advantage or disadvantage to having the site-packages in the current location?
    - Would it be better in one of the Python site-packages directories that already exist?
    - Do I need to be concerned about virtualenv working correctly, since I have modified PYTHONPATH (or about any other conflicts, for that matter)?

    Reasons for installing from source: Dan Benjamin of Hivelogic covers the benefits of and instructions for installing Mercurial from source in his article "Installing Mercurial on Snow Leopard".
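
    Mercurial's Makefile has historically passed PREFIX and PYTHON straight through to distutils, so one hedged answer to the first question is to point both at the framework install (the paths below assume the python.org 2.6 framework layout):

        cd ~/src/mercurial-1.3.1
        sudo make install \
          PYTHON=/Library/Frameworks/Python.framework/Versions/2.6/bin/python2.6 \
          PREFIX=/Library/Frameworks/Python.framework/Versions/2.6

        # the modules should then land in the framework's own site-packages,
        # making the PYTHONPATH override unnecessary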

  • Content farms, scraper sites, aggregators: real-world examples? [closed]

    - by Marco Demaio
    Could you please clarify for me:

    - efreedom.com is a scraper site, not a content farm, because it simply copies and pastes content from Stack Overflow?
    - ehow.com and squidoo.com are content farms? They don't copy and paste content; they just generate fresh new user-generated content, but too much of it and too quickly.
    - experts-exchange.com is NOT a content farm or a scraper site, right?! It's simply that many people (me too) hate it (they also wrote to Matt Cutts), because it shows up high in Google while providing a useless question with no answer.

    There are also many sites that act as "content aggregators in the form of specialized directories" (let's call them CASDs); I don't know how else to define them. Do they have a specific definition? Anyway, are these CASDs content farms, scraper sites, or something else? Basically, a CASD searches for all sites of the same type, e.g. restaurant websites; it copies and pastes the content found on "Restaurant A's" site and creates on its aggregator site a new page called "Restaurant A"; then it does the same for all websites of the same type, creating a sort of directory of restaurants. Later on, the CASD also sends an email to the owner of "Restaurant A" (usually the address is on the website) with a username and password to let him modify/update his own page on the CASD site. Later still, the CASD might ask the owner of "Restaurant A" for money because it brings him traffic; otherwise it removes his page from the aggregator. Someone could call these simply directories, but I think a directory is different, because a directory is something you ask to have your site added to by filling in a form, not something that takes content from your existing site without specific acceptance from the site's owner. I also really wonder how Google will sort out all this mess of sites packed with content that show up more and more, and everywhere, in search results.

  • How to architect Rails site that can be edited while running?

    - by Chris Kimpton
    Hi, I am writing a Rails app that scrapes/navigates some other websites and web services for content. I am using Mechanize and Savon to do the heavy lifting. But given the dynamic nature of the web, I'd like to make my calls to these editable by the admin users of the site, rather than requiring me to release a new version of the site. The actual scraping thread runs asynchronously to the website, using the daemons gem. My requirements are:

    - Given that the scraping/webservice-calling code is quite simple, the easiest route is to make the whole class editable by the admins.
    - Keep a history of the scraping code, so that we can fairly easily revert if we introduce a problem.
    - Initially use the code from the file system, but as soon as it has been edited and stored somewhere, use that code instead.

    I am thinking my options are:

    - Store the code in the db (with a history table for the old versions).
    - Store the code in a private git repo somewhere and access that for the history/latest versions.

    I am thinking the git route might be easiest, given its raison d'etre is to track file history... But perhaps there is a gem/plugin that does all this for me, out of the box? Thanks in advance for any tips/advice. ~chris
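
    For the git option, the day-to-day mechanics are small enough to sketch from the shell; the repo path, file name, and commit hash below are all made up for illustration:

        cd /srv/scraper-code             # the private repo holding the editable classes
        git log --oneline -- scraper.rb  # history of one scraper class
        git show HEAD:scraper.rb         # the latest stored version, as the app would load it

        # roll the class back to a known-good commit and record the revert
        git checkout a1b2c3d -- scraper.rb
        git commit -m "Revert scraper to known-good version"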

  • Do any CDN services offer multiple urls (or aliases) for your files?

    - by Jakobud
    Let's say a company has multiple commercial web properties that happen to use a lot of the same images on each site. For SEO reasons, the sites must not appear to be related to each other in any way. This means the sites can't all link to the same image, even though they all use the same one; therefore, an image is uploaded to each site and served from each site separately. To improve maintainability and latency, let's say the company wanted to use a CDN service. What I'm wondering is: if you upload a file, like an image, to a CDN, is there basically one single URL that you access that image at? Or do some (or all) CDN services offer alias URLs, so that you can access the same resource from multiple URLs?

    Example of the undesirable situation, where both sites link to the same file URL:

        Site ABC links to <img src="http://123.cdnservice.com/some-path/myimage.jpg"/>
        Site XYZ links to <img src="http://123.cdnservice.com/some-path/myimage.jpg"/>

    Example of the DESIRABLE situation, where both sites link to the same file via different URLs:

        Site ABC links to <img src="http://123.cdnservice.com/some-path/myimage.jpg"/>
        Site XYZ links to <img src="http://123.cdnservice.com/some-alias-path/myimage.jpg"/>

    So in the end there is only one single file, myimage.jpg, on the CDN server, but it is accessible from multiple URLs. Is this possible with CDN services? I know this would make browsers cache the same image twice, but at least it would be better for maintainability: only one file would ever have to be uploaded.
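
    Whichever CDN is involved, it is easy to check from outside whether two URLs are backed by the same object; a small sketch with curl, using the hypothetical URLs above:

        # compare identity headers for the two paths
        curl -sI http://123.cdnservice.com/some-path/myimage.jpg | grep -iE 'etag|content-length|last-modified'
        curl -sI http://123.cdnservice.com/some-alias-path/myimage.jpg | grep -iE 'etag|content-length|last-modified'

    Matching ETag and Content-Length values are a good hint (though not proof) that one stored file sits behind both URLs.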

  • IIS6 won't respond to a request for a JS file after accessing through subdomain

    - by James
    I have a site running off www.mysite.com, for example. There is a JS file I'm accessing: www.mysite.com/packages.js. The first and subsequent times I access that packages.js file cause no problems... until I access a sub-site like this: sub-site.mysite.com. This naturally makes a request for that same packages.js, but the site hangs, as it just keeps waiting and waiting for that JS file. Going back to the main site, the problem persists there. If I then rename packages.js to, say, packages2.js, it then behaves the same way: I can access the file on the main site, but after I try to access it through a sub-site, IIS fails to respond to requests for that file. I realise this explanation is a little vague, but has anyone seen this sort of behaviour before? Thanks very much, James.

  • cannot import name formats

    - by cadthecoder
    What does this mean? I've googled but found nothing =/

        ImportError at /admin/
        cannot import name formats

        Request Method:     GET
        Request URL:        http://127.0.0.1:8000/admin/
        Exception Type:     ImportError
        Exception Value:    cannot import name formats
        Exception Location: /usr/lib/python2.6/site-packages/django/contrib/admin/util.py in <module>, line 3
        Python Executable:  /usr/bin/python
        Python Version:     2.6.0
        Python Path: ['/home/cad/project/lkd/gezegen/lkd_gezegen',
                      '/usr/lib64/python26.zip',
                      '/usr/lib64/python2.6',
                      '/usr/lib64/python2.6/plat-linux2',
                      '/usr/lib64/python2.6/lib-tk',
                      '/usr/lib64/python2.6/lib-old',
                      '/usr/lib64/python2.6/lib-dynload',
                      '/usr/lib64/python2.6/site-packages',
                      '/usr/lib64/python2.6/site-packages/Numeric',
                      '/usr/lib64/python2.6/site-packages/PIL',
                      '/usr/lib64/python2.6/site-packages/gst-0.10',
                      '/usr/lib64/python2.6/site-packages/gtk-2.0',
                      '/usr/lib64/python2.6/site-packages/wx-2.8-gtk2-unicode',
                      '/usr/lib/python2.6/site-packages']
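
    The traceback's Python path contains two site-packages directories (/usr/lib64/... and /usr/lib/...), and django.utils.formats only appeared around Django 1.2, so a likely culprit is two Django installs shadowing each other: an admin package new enough to want formats importing against an older core. A quick check (a sketch; the paths are taken from the traceback above):

        # which Django tree is actually imported, and what version does it claim?
        python -c "import django; print django.__file__; print django.get_version()"

        # is there a copy in both site-packages directories?
        ls -d /usr/lib/python2.6/site-packages/django /usr/lib64/python2.6/site-packages/django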

  • How to use the Request URL/URL Rewriting for Localization in ASP.NET - Using an HTTP Module or Global.asax

    - by LocalizedUrlDMan
    I wanted to see if there is a way to use the request URL/URL rewriting to set the language a page is rendered in, by examining a portion of the URL in ASP.NET. We have a site that already works with ASP.NET's resource localization, and users can change the language that pages/resources on the site appear in; however, the current mechanism is not very search-engine friendly, since all the language variations of each page appear as one page. It would be much better if we could have pages like www.site.com/en-mx/realfolder/realpage.aspx that allow linking to culture-specific versions of a page. I know lots of people have likely done localization through URL structures before, and I wanted to know if any of you could share how to do this in the Global.asax file or with an HTTP module (pointers to blog postings would be great too). We have a restriction that the site is based on ASP.NET 2.0 (so we can't use the 3.5+ features yet). Here is the example scenario:

    A real page exists at www.site.com/realfolder/realpage.aspx. The page has a mechanism for the user to change the language it is displayed in via a dropdown. There are search-engine optimization and link-sharing benefits to doing this, since people can link directly to a page whose content applies to a certain language (this could also include right-to-left layouts for languages like Japanese). I would like to use an HTTP module to see if the first part of the URL after www.site.com, site.com, subdomain.site.com, etc. contains a valid culture code (e.g. en-US, es-MX) and then use that value to set the localization culture of the page/resources based on that URL. So if the user accesses the URL www.site.com/en-MX/realfolder/realpage.aspx, the page will render in Mexico's variant of Spanish. If the user goes to www.site.com/realfolder/realpage.aspx directly, the page would just use their browser's language settings.

  • Spotlight on mkyong

    - by MarkH
    Occasionally, I'd like to share a blog I've discovered or that someone has passed along to me. Criteria are few, but in a nutshell, it must be:

    - Java-related. (Doh!)
    - Interesting. A good blog is exciting to read at some level, whether due to perspective, eye-catching writing, or technical insight. It doesn't have to read like a Stephen King novel, but it should grab you somehow.
    - Technically deep or technically broad. A site that dives deeply, quickly is a great reference for particular topics/tasks. On the other hand, one that covers a lot of ground at a high-but-still-technical level can be a handy site to visit occasionally as well. Both are what I consider "bookmarkable", but for different reasons.

    Drumroll, please... With that in mind, this Blog Spotlight is cast upon mkyong.com, a site I stumbled across that offers a little bit of everything for various Java dev audiences. The title indicates the site is for "Java web development tutorials", and indeed it does have these: JSF, Spring, Struts, Hibernate, JAX-WS, JAX-RS, and numerous other topics are addressed to varying degrees. The site isn't devoted exclusively to server-side tutorials, though. Recent posts include mobile development topics, and the links at the bottom of the page connect you to reference pages and other useful sites. I've poked around through a couple of the tutorials and, while they won't take you from "zero to hero", they do seem to provide a nice overview of the subject at hand. They also offer an occasional explanatory comment that is missing from far too many texts, sites, and doc pages. It's not a perfect site, but I like it.

    The Bottom Line: mkyong.com offers a nice "summary site" of server-side tutorials, mobile dev posts, and reference links. Check it out!

    All the best,
    Mark

  • Order Multidimensional Arrays PHP

    - by ronsandova
    Hi everyone, I have a problem ordering an array by one of its fields. Here is the example:

        foreach ($xml as $site) {
            echo '<div><a href="'.$site->loc.'">'.$site->loc.'</a>'.$site->padre.'</div>';
        }

    Sometimes the field $site->padre is empty, but I'd like to order by $site->padre alphabetically. I saw an example with usort, but I don't understand how to make it work. Thanks in advance. Cheers
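
    usort just needs a callback that returns a negative, zero, or positive number for each pair of elements; a self-contained sketch, runnable from the shell, with made-up sample data (if $xml is SimpleXML, note the cast to string before comparing):

        php -r '
        function by_padre($a, $b) {
            // the (string) cast also covers SimpleXML elements
            return strcmp((string) $a->padre, (string) $b->padre);
        }
        $sites = array(
            (object) array("padre" => "zulu"),
            (object) array("padre" => ""),     // empty fields sort first
            (object) array("padre" => "alpha"),
        );
        usort($sites, "by_padre");
        foreach ($sites as $site) { echo $site->padre, "\n"; }
        '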

  • Apache rewriteBase in .htaccess for development subdomains

    - by Das123
    I think I'm missing something and don't really understand how RewriteBase works. The problem I have is that on my production site the site is in the root directory, yet my development site is in a localhost subdirectory, e.g. http://www.sitename.com/ vs http://localhost/sitename/. If I have, for example, an images folder, I want to reference the images from the site root by using the initial slash in the href. Using a relative href (without the initial slash) is not an option. This works on the production site, but the development site looks for http://localhost/images/imagename.jpg instead of http://localhost/sitename/images/imagename.jpg. So I thought all I needed to do was set up the following in my .htaccess file to force the site root to my subdirectory within the development environment:

        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /sitename

    But this still uses localhost as the site root instead of localhost/sitename. Can anyone please give me some pointers?

  • Low pagerank backlinks - does Google penalize?

    - by Programmer Joe
    I have a new stock discussion forum and I would like to promote it. Specifically, I have two ideas in mind:

    1) Become a member at other stock discussion forums, make high-quality posts, build a good reputation, and leave a link to my own forum in a non-intrusive way (i.e. in my signature or at the end of my posts). This approach makes sense because members of other forums are already interested in stock discussion, and a backlink to your forum, as long as it is not done in an intrusive/spammy way, should come across as acceptable.

    2) Promote my site by writing articles at Squidoo, Hubpages, etc. This approach also makes sense because that's what Squidoo and Hubpages are for.

    The problem with both these approaches is that the page I leave a backlink from may have a low PR; most likely, a PR of 0. Now, I have read that after the Penguin update, Google can penalize your site if you have too many backlinks from low-PR pages: http://www.entrepreneur.com/article/224339

    So I am caught in a dilemma:

    a) If I start promoting my site via other stock forums, Squidoo, Hubpages, etc., and the backlinks to my site come from pages with low PR, Google may penalize my site.

    b) However, if I don't promote my site, nobody will ever discover it (aside from other promotion techniques like social-media promotion, directories, etc.).

  • How to update Adobe's software unattendedly?

    - by jubel
    I would like to use unattended-upgrades to update Adobe Reader, Flash Player and everything else from the Canonical partner repository. Therefore, I added the following in /etc/apt/apt.conf.d/50unattended-upgrades:

        Unattended-Upgrade::Allowed-Origins {
                "${distro_id} ${distro_codename}-security";
                "${distro_id} ${distro_codename}-updates";
                "Canonical ${distro_codename}";
        //      "${distro_id} ${distro_codename}-proposed";
        //      "${distro_id} ${distro_codename}-backports";
        };

    sudo unattended-upgrade --dry-run -d says:

        Initial blacklisted packages:
        Starting unattended upgrades script
        Allowed origins are: ['o=Ubuntu,a=oneiric-security', 'o=Ubuntu,a=oneiric-updates', 'o=Canonical,a=oneiric']
        Checking: acroread-common (["<Origin component:'partner' archive:'' origin:'' label:'' site:'archive.canonical.com' isTrusted:False>"])
        Checking: adobe-flash-properties-gtk (["<Origin component:'partner' archive:'' origin:'' label:'' site:'archive.canonical.com' isTrusted:False>"])
        Checking: adobe-flashplugin (["<Origin component:'partner' archive:'' origin:'' label:'' site:'archive.canonical.com' isTrusted:False>"])
        Checking: adobereader-deu (["<Origin component:'partner' archive:'' origin:'' label:'' site:'archive.canonical.com' isTrusted:False>"])
        Checking: handbrake-cli (["<Origin component:'main' archive:'oneiric' origin:'LP-PPA-stebbins-handbrake-snapshots' label:'HandBrake Snapshots' site:'ppa.launchpad.net' isTrusted:True>"])
        Checking: handbrake-gtk (["<Origin component:'main' archive:'oneiric' origin:'LP-PPA-stebbins-handbrake-snapshots' label:'HandBrake Snapshots' site:'ppa.launchpad.net' isTrusted:True>"])
        Checking: sopcast-player (["<Origin component:'main' archive:'oneiric' origin:'LP-PPA-ferramroberto-sopcast' label:'LffL Sopcast' site:'ppa.launchpad.net' isTrusted:True>"])
        pkgs that look like they should be upgraded:
        Fetched 0 B in 0s (0 B/s)
        blacklist: []
        InstCount=0 DelCount=0 BrokenCout=0
        No packages found that can be upgraded unattended

    And it won't update. How can I update this third-party software automatically?
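
    Note in the dry-run output that the partner packages carry an empty origin field (origin:''), so the "Canonical oneiric" entry in Allowed-Origins never matches them. If the installed unattended-upgrades is new enough to support Unattended-Upgrade::Origins-Pattern, matching on the site field is one hedged workaround (the file name below is arbitrary):

        sudo tee /etc/apt/apt.conf.d/51unattended-upgrades-partner <<'EOF'
        // match the partner archive by mirror site, since its Origin field is empty
        Unattended-Upgrade::Origins-Pattern {
                "site=archive.canonical.com";
        };
        EOF

        # re-run the dry run to see whether the partner packages are now picked up
        sudo unattended-upgrade --dry-run -d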

  • Big Oh notation does not mention constant value

    - by user883561
    I am a programmer and have just started reading about algorithms. I am not completely convinced by the notations, namely Big Oh, Big Omega and Big Theta. The reason is that, by the definition of Big Oh, there should be a function g(n) such that c.g(n) is always greater than or equal to f(n); that is, f(n) <= c.g(n) for all n >= n0. My doubt is: why don't we mention the constant value in the definition? For example, take the function 6n+4: we denote it as O(n), but the definition does not hold for every constant value; it holds, for instance, with c = 10 and n0 = 1. For smaller values of c (as long as c > 6), the required value of n0 increases, and for c <= 6 no n0 works at all. So why do we not mention the constant value as part of the definition?
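
    As a worked instance of the definition (standard textbook algebra, added here for illustration):

        f(n) = 6n + 4 <= 6n + n = 7n    for all n >= 4

        so the pair c = 7, n0 = 4 (or equally c = 10, n0 = 1) witnesses 6n + 4 = O(n).
        The notation omits the constants because only the existence of some pair
        (c, n0) matters; any two valid pairs certify exactly the same fact.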

  • How to overcome politics of the net (Google translate code refuses to work from a specific region)

    - by Jawad
    According to the FAQs I am not sure if my question is OK to ask, or will be closed, or should be posted on meta, and I wouldn't blame anyone for downvoting it. However, it is one that has been bugging me since the trouble started. Let me explain. I have this web site. It uses the Google Translate API (I can't post the link; it does not open from this region) with the following code:

        <meta name="google-translate-customization" content="9f841e7780177523-3214ceb76f765f38-gc38c6fe6f9d06436-c"></meta>
        <script type="text/javascript">
        function googleTranslateElementInit() {
          new google.translate.TranslateElement({pageLanguage: 'en'}, 'google_translate_element');
        }
        </script>
        <script type="text/javascript" src="http://translate.google.com/translate_a/element.js?cb=googleTranslateElementInit"></script>

    The problem is that it just stopped working. On the site you can see that I had to actually remove the above from here, here, and here, while leaving it here, here, here and here. This is because the web site refuses to load at all, from this region, on the pages that have the code. If I use the Firefox Stealthy plugin and open the site in Firefox, it works like a charm without any problems. But with Google Chrome, Apple Safari and the Opera web browser, the site does not load/open at all because of Google Translate. (I know this because if I remove the Google Translate code, the site works/loads fine.) It was one thing to program for cross-browser compatibility, and altogether another to program for "cross-region compatibility". What can I do to make sure that the site works from anywhere? Do I completely remove the Google Translate code and just do without the additional functionality, or do I look for alternatives like this, or according to this?

  • How do I use IIS6 style metabase paths in IIS7 AppCmd tool?

    - by Mike Atlas
    I'm currently in the process of upgrading old IIS6 automation scripts that use the IISVDir tool to create/modify/update apps and virtual directories, replacing them with AppCmd for IIS7. The IIS6 IISVDir commands reference metabase paths such as /W3SVC/1/ROOT/MyApp, where 1 is the ID of the "Default Web Site" site. The command doesn't actually require the display name of the site to make changes to it. This works well because, on a different-language OS, the "Default Web Site" could be named, for example, "??? Web ???" or anything else for that matter. But this flexibility is lost if AppCmd can only reference "Default Web Site" by its name, and not by a language-neutral identifier. So, how can I script AppCmd to refer to sites, vdirs and apps using language-neutral identifiers for the "Default Web Site"? Perhaps I need to start creating my own site from the start, name it something specific, and stop using "Default Web Site" as the root? (Disclosure: I only have an IIS7 English machine to work on currently, but I have both IIS6 English and IIS6 Japanese machines for testing my old scripts; so perhaps it really is still just "Default Web Site" on Windows 2008 Japanese?)
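
    One avenue worth testing: AppCmd can select objects by attributes other than the display name, and its /xml output can be piped back into a second AppCmd with /in, which would let the numeric site ID act as the language-neutral handle. A hedged sketch (untested here, and assuming ID 1 is the default site):

        rem find the site by numeric ID, whatever its localized display name is
        %windir%\system32\inetsrv\appcmd list site /id:1

        rem pipe the match into a second appcmd to act on it
        %windir%\system32\inetsrv\appcmd list site /id:1 /xml | %windir%\system32\inetsrv\appcmd start site /in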

  • Why would certain browsers request all pages on my ASP.Net Web site twice?

    - by Deane
    Firefox is issuing duplicate requests to my ASP.NET web site. It will request a page, get the response, then immediately issue the same request again (well, almost the same; see below). This happens on every page of this particular web site (but not any others). IE does not do this, but Chrome also does it. I have confirmed that there is no Location header in the response, and no JavaScript or meta tag in the page that would cause the page to be re-requested (if any of these were present, IE would be re-requesting pages as well). I have confirmed this behavior on multiple Firefox installs on multiple machines. Versions vary, but all are 3.x. The only difference between the two requests is the Accept header. For the first request, it looks like this:

        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

    For the second request, it looks like this:

        Accept: */*

    The Content-Type response header in all cases is:

        Content-Type: text/html; charset=utf-8

    Something else odd: even though Firefox requests the page twice, it uses the first response and discards the second. I put a counter on a page that increments with every request, and I can watch the responses come back (via the Charles proxy). Firefox will get a "1" the first time and a "2" the second time, yet it will display the "1", for some reason. Chrome exhibits this exact same behavior. I suspect it's a protocol-level issue, given the difference in Accept headers, but I've never seen this before.
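
    For what it's worth, a duplicate request with Accept: */* that Firefox and Chrome issue but IE does not is the classic signature of an empty src or href somewhere in the markup: an <img src=""> resolves to the current page, which then gets fetched a second time as a subresource. A hedged way to hunt for that in the source tree (the web-root path is a placeholder):

        # look for empty src/href attributes and empty CSS url() references
        grep -rnE 'src=""|href=""|url\(\)' /path/to/webroot \
            --include='*.aspx' --include='*.ascx' --include='*.master' --include='*.css'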

  • Feature activates via UI but list does not show in Libraries and Lists

    - by Justin Cullen
    I am really hating SharePoint, as there is hardly any good, concrete documentation. I developed a custom list, "MainCatalog", with a few columns (not site columns). I created the feature and elements with MOSS Feature Builder at the site-collection level, so scope="site", installed it via stsadm, and activated it via the UI: I went to the site collection website > Site Settings > Site Collection Features, saw my custom "MainCatalog" feature, and was able to activate it. Then I went to mySiteCollection > Site Settings > Site Libraries and Lists, but my list is not showing there. It does show under mySiteCollection > Create > Custom Lists as "MainCatalog"; I guess it's showing there as a template... But my intention is to deploy this list from development to the test environment. EXTREMELY STRESSED. I HAVE BEEN ON THIS FOR THE LAST 8 DAYS.
