Search Results

Search found 50980 results on 2040 pages for 'http compression'.


  • Removing surrounding noise from a voice recording

    - by Peak Reconstruction Wavelength
    I have a wave file whose frequency spectrum looks like this: http://i.stack.imgur.com/2rRaS.png It contains voice audio, which I want to keep while removing the rest. The problem is that the surrounding noise changes, while the distinct voice patterns remain. I marked the voice patterns for clarity: http://i.stack.imgur.com/eLkBl.png What could an algorithm, or a workflow in Adobe Audition, look like that removes everything but the voice patterns? I think the main characteristic is the line-shaped form over time. Loudness alone is not enough, as the noise is loud as well.

    Read the article

  • problem showing my website correctly in search engines

    - by dinbrca
    Hello guys, I have a website which has been indexed on Google for about 15 days. Some of my pages pass arguments like this: http://www.bla.com/products.php?pro=bla&page=view Then I read that passing arguments like this isn't good for SEO purposes, so I started using an .htaccess rewrite and changed the URLs to look like this: http://www.bla.com/products/bla/*view*/ But Google still shows my site with the first style of link. What should I do? I thought I should wait for the search engine to crawl my site again, but nothing has happened. Thanks in advance, Din
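    A hedged sketch of one common fix: 301-redirect the old query-string URLs to the new clean paths so crawlers update their index on the next visit. The rule below mirrors the parameter names from the question, but the exact pattern and target path are assumptions:

        RewriteEngine On
        # Permanently redirect /products.php?pro=NAME&page=view to /products/NAME/view/
        RewriteCond %{QUERY_STRING} ^pro=([^&]+)&page=view$
        RewriteRule ^products\.php$ /products/%1/view/? [R=301,L]

    The trailing ? on the substitution drops the old query string from the redirect target.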

    Read the article

  • SharePoint Search Problem: The start address sps3://server cannot be crawled.

    - by Clara Oscura
    With this post, I'm going to start a series on problems I have encountered with SharePoint search. Error: The start address sps3://luapp105 cannot be crawled. Context: Application 'Search_Service_Application', Catalog 'Portal_Content' Details: Access is denied. Verify that either the Default Content Access Account has access to this repository, or add a crawl rule to crawl this repository. If the repository being crawled is a SharePoint repository, verify that the account you are using has "Full Read" permissions on the SharePoint Web Application being crawled. (0x80041205) (Event ID: 14, Task Category: Gatherer) Solution: grant the appropriate permissions to the User Profile Synchronization Service. http://social.technet.microsoft.com/Forums/en-US/sharepoint2010setup/thread/64cdf879-f01e-4595-bc52-15975fefd18d http://www.dotnetmafia.com/blogs/dotnettipoftheday/archive/2010/03/29/how-to-set-up-people-search-in-sharepoint-2010.aspx

    Read the article

  • Setting proxy from terminal

    - by baltusaj
    I have tried changing my proxy settings in a terminal with: export HTTP_PROXY=http://10.1.3.1:8080 and export http_proxy=http://10.1.3.1:8080 but when I try to install a new package or run apt-get update, apt-get displays messages suggesting it is still trying to connect to a previously set proxy: sudo apt-get update 0% [Connecting to 10.1.2.2 (10.1.2.2)] [Connecting to 10.1.2.2 (10.1.2.2) I have tried setting the proxy via the bashrc file, but that didn't work either. As far as I remember, 10.1.2.2 was set using the GNOME GUI, but I don't have access to the GUI right now, so I am trying to set it from the terminal.
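    A hedged note: apt does not reliably honour shell environment variables, and sudo resets the environment anyway, so the old proxy is probably coming from apt's own configuration. A minimal sketch of an apt-specific proxy setting, reusing the proxy address from the question (the file name is an assumption; any file under /etc/apt/apt.conf.d/ works):

        # /etc/apt/apt.conf.d/95proxy
        Acquire::http::Proxy "http://10.1.3.1:8080/";

    It is also worth grepping /etc/apt/ for the stale 10.1.2.2 entry and removing it.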

    Read the article

  • What measures can be taken to make sure Google is aware of the existence of a newly created page?

    - by knorv
    Consider a website with a large number of pages, where new pages are published regularly. When publishing a new page, the website operator wants the newly created page indexed by Google as soon as possible, minimizing the time between publication and indexing. Consider the site http://www.example.com/ with hundreds of thousands of pages. The page http://www.example.com/something/important-page.html is created at, say, 12:00. I want to get important-page.html indexed as soon as possible after 12:00, ideally within seconds or minutes. What options are available to try to get Google to index a specific newly created page as soon as possible?
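    A hedged sketch of one commonly cited approach from that era: list the new URL in an XML sitemap and ping Google's sitemap endpoint so the crawler learns about the change quickly (the sitemap path is an assumption):

        curl "http://www.google.com/ping?sitemap=http://www.example.com/sitemap.xml"

    Submitting the URL through Google Webmaster Tools and linking to the new page from frequently crawled pages are the other levers usually mentioned.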

    Read the article

  • Question about mod_rewrite rule for redirecting failing pages

    - by SimpleCoder
    I'm setting up a mod_rewrite rule that redirects failing pages to a custom Page Not Found page. This is with WordPress. I'm using the guide here: http://httpd.apache.org/docs/2.2/rewrite/rewrite_guide_advanced.html#redirect404. My rule so far looks like this:

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.+) http://example.com/?page_id=254 [R]

    This works. It seems to be a combination of the first and second suggestions that worked, since the -U flag did nothing. My question, out of curiosity, is why the following happens: when I change REQUEST_FILENAME to REQUEST_URI (as the second example suggests), the page loads, but none of the style sheets load. All of my formatting is gone, and this happens on every page. Can anyone think of why this might happen?
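    For comparison, a hedged sketch of the stock WordPress rewrite block, which also tests REQUEST_FILENAME (the request mapped to a filesystem path). A plausible, unconfirmed explanation for the stylesheet problem: REQUEST_URI is the raw URL path, so a test like !-f checks whether a file literally named /wp-content/style.css exists at the filesystem root, which it rarely does, and so asset requests get redirected too.

        # Stock WordPress rules (for reference)
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]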

    Read the article

  • No sound in any web browser

    - by shaneo
    Hello, I recently tried to compile and update ALSA from source via this guide: http://www.stchman.com/alsa_update.html. Afterwards, in every web browser I open (Firefox, Opera, Chrome, Chromium, Iron) there is no sound on any page. I went back through the script listed on the site, found where it had installed the drivers, deleted them, and then re-installed ALSA via Synaptic. I still have no sound in my browsers. All system sounds work as they are supposed to; only web sounds don't. Here is my alsa-base.conf: http://paste.ubuntu.com/1073135/ along with a snapshot of alsamixer. Any assistance would be greatly appreciated. Thank you, and let me know if any more information is required.

    Read the article

  • Tales from the Coal Face - Reporting errors

    - by TATWORTH
    One of the questions that comes up frequently is: "Is it worthwhile to report errors?" Last weekend, after installing the latest StyleCop, I loaded up my copy of Power Collections. I found that StyleCop was now correctly picking up a lot of missing "this." prefixes; however, there were now a number of false positives. Anticipating the need to submit sample code, I cleaned the solution and zipped it up. I reported this at http://stylecop.codeplex.com/discussions/357319. The StyleCop administrator promoted the report to a work item (see http://stylecop.codeplex.com/workitem/7285) and I uploaded the previously prepared ZIP file. The StyleCop team was able to locate the problem, and it is "Fixed in upcoming 4.7.27". The conclusion: report errors! And prepare sample code illustrating the error.

    Read the article

  • How to prevent a 404 Error when creating a subdomain and using www to access it?

    - by Chris
    I have installed a multi-site installation of WordPress onto my domain. I then added the necessary code to the wp-config.php file and .htaccess, as instructed by WordPress. I also installed a plugin called Quick Page/Post Redirect Plugin, which allowed me to place a 301 redirect onto the main domain, as I only want to use the subdomain and not the main domain. Then I added the following line of code to the wp-config.php file to redirect the main domain: define( 'NOBLOGREDIRECT', 'URL Redirect Address' ); The site works fine with a redirect on the main domain, and my subdomain runs fine when you type in subdomain.example.com or http://subdomain.example.com. However, when I enter www.subdomain.example.com or http://www.subdomain.example.com, the following error message is returned: Not Found The requested URL / was not found on this server. Apache/2.4.9 (Unix) Server at www.subdomain.domain.com Port 80 Any help with this would be much appreciated.
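    A hedged sketch: the two usual culprits are a missing DNS record for the www-prefixed name and a virtual host that does not answer for it. Assuming an Apache virtual host (the names and paths below are assumptions), adding a ServerAlias covers the second case; a DNS A or CNAME record for www.subdomain.example.com is still needed for the first:

        <VirtualHost *:80>
            ServerName subdomain.example.com
            ServerAlias www.subdomain.example.com
            DocumentRoot /path/to/wordpress
        </VirtualHost>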

    Read the article

  • I cannot download anything

    - by Jason Machen
    I am very new to Ubuntu but decided to wipe my Windows 7 and install it. I cannot download anything from the Software Center. This is the error message I get (I can use the web in all other ways, including this site). What can I do? Thanks, Jason W:Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/main/source/Sources 404 Not Found [IP: 91.189.91.13 80] W:Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/restricted Plus about 20 other lines.
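    A hedged note: Raring (13.04) was an interim release with a short support window, and once a release reaches end of life its packages are moved to old-releases.ubuntu.com, at which point the original archive URLs start returning 404. A minimal sketch, assuming that is what happened here (back up sources.list first):

        sudo sed -i 's/\(archive\|security\).ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
        sudo apt-get update

    Upgrading to a supported release afterwards is the longer-term fix.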

    Read the article

  • Read data from a folder in main domain folder (CPanel\WHM)

    - by Memphis Raines
    I have defined a host on my cPanel/WHM server and put all my websites under one hosting account. The main domain is domain.com, and all other websites are add-on domains: domain.com --folder --domain1 --domain2 --domain3 ... What I need is for the server to read files from another folder when domain.com is requested in a browser. For example, a call to http://domain.com should serve the content of http://domain.com/folder, BUT I don't mean a redirection; I want the server to do this in the background without showing visitors the real path. I couldn't do this with Domain Wildcard Redirection because it produced an error. How can I do this? With .htaccess, or something else?
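    A hedged sketch of an internal rewrite in the main domain's .htaccess; because there is no R flag, Apache serves the files from /folder while the visible URL stays unchanged (the host and folder names mirror the question):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?domain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/folder/
        RewriteRule ^(.*)$ /folder/$1 [L]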

    Read the article

  • Figuring out complex REST queries for SharePoint

    - by Sahil Malik
    SharePoint 2010 Training: more information A little while ago, I showed the REST query for a relatively complex query. Some readers have emailed me about how to figure out further queries, especially for complex insert/delete/update scenarios. Well, it is quite easy to figure out almost any query for the SharePoint REST API. Okay, this is not just about SharePoint – you can apply what you read here to any REST API interface supported by Microsoft, like WCF Data Services. But sometimes, when you have many columns, or complex update operations, or are working with weird providers, it is tough to figure out the specific HTTP request you need to craft, error free, using REST. Well, fear not, there is hope. As an example, I created a SharePoint site at http://sp2010.winsmarts.internal/sampledata with 3 lists in it - 1. Artists (with one column, Title) Read full article ....
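    A hedged illustration of the kind of request involved, using SharePoint 2010's listdata.svc REST endpoint against the sample site from the article (the list name matches the excerpt; the query itself is an assumption):

        GET http://sp2010.winsmarts.internal/sampledata/_vti_bin/listdata.svc/Artists?$select=Title
        Accept: application/json

    Watching such requests in a proxy like Fiddler is a common way to reverse-engineer the more complex insert/update payloads.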

    Read the article

  • JDeveloper does not recognize existing subversion working directory

    - by Bob Webster
    Just a quick note about an issue where JDeveloper no longer recognized an existing Subversion working directory.

    Symptom: the JDeveloper Versioning menu offers to version an application that is already versioned in svn.

    Cause: the repository URL contained in the hidden .svn folders of the working directory is no longer valid.

    Solution: determine the correct URL for the Subversion repository, then fix the URL recorded in the working directory using the svn switch command.

    Example: in a shell, change directory to the application folder and run svn info to confirm the current settings:

        $ svn info
        Path: .
        URL: http://192.168.1.128/repos/jdeveloperrepo/AsyncExamples/BPELCallAsync/trunk
        Repository Root: http://192.168.1.128/repos/jdeveloperrepo
        Repository UUID: 3dc5eb88-3001-0010-8d6e-fd6f73825647
        Revision: 145
        Node Kind: directory
        Schedule: normal
        Last Changed Rev: 145
        Last Changed Date: 2012-06-07 07:15:56 -0700 (Thu, 07 Jun 2012)

    In this case, the IP address in the repository URL is incorrect; the svn server is actually located at 192.168.56.1. (Note: the IP address currently set is displayed after the project name in the Application Navigator; see the screen snapshot above.)

    Run the svn switch command with the --relocate option, providing as much of the URLs as necessary to correctly rewrite the URL from current to new. For example, to change the repository server address from 192.168.1.128 to 192.168.56.1:

        $ svn switch --relocate http://192.168.1.128 http://192.168.56.1 .

    (Note the trailing period in the above command.) When the URL is correct, JDeveloper should recognize the Subversion working directory.

    Read the article

  • URL parameter names being changed by user agents

    - by Mike Deck
    In reviewing one of our site's web logs, I'm seeing instances where we return a 404 because we expect an id parameter but instead receive a di parameter. The resource in question is an image, and which image file actually gets served depends on the id parameter. The expected URL is something like http://images.mysite.com/photo.gif?id=123&width=200&height=300 What I'm seeing in the logs is requests for http://images.mysite.com/photo.gif?di=123&width=200&height=300 The only parameter where we see this is id. It seems unlikely to be a server-side or JavaScript bug, since it only affects a small percentage of our traffic, and we see it across a wide variety of user agents (both mobile and desktop) and IPs. Has anyone else seen this? Is there a browser plugin or other software you're aware of that could be causing it, and if so, is there a good way to work around the issue?

    Read the article

  • Best strategy to discover a web service in a local network?

    - by Ucodia
    I am currently doing some research for a project. The setup is simple: I have a computer running a service in my home network, and any device connected to that same network should be able to discover the service automatically and use it. I have no specific technology requirement, whether on the server or client side. The client knows the service definition. Other than that, I have no idea what strategy to use, what technology to look at, or whether I should go for a SOAP or an HTTP-based service. I think going with HTTP and a REST API is best for targeting all devices, but I am open to any suggestions. Thanks.
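    A hedged illustration of one common zero-configuration approach, mDNS/DNS-SD as implemented by Avahi on Linux (Bonjour on Apple platforms); the service name, type, and port below are assumptions:

        # On the server: advertise an HTTP service on the LAN
        avahi-publish-service "MyHomeService" _http._tcp 8080 &

        # On any client in the same network: discover and resolve it
        avahi-browse --resolve _http._tcp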

    Read the article

  • Repeater vs. ListView

    - by MoezMousavi
    I really do hate the Repeater. I was more of a GridView lover, but after .NET 3.5 was born I prefer the ListView. The first problem with the Repeater is paging: you need to write code to handle it. The second common problem is the empty-data template. Have a look at this:

        // Show the "empty" label in the footer when the Repeater bound zero items
        if (rptMyRepeater.Items.Count < 1)
        {
            if (e.Item.ItemType == ListItemType.Footer)
            {
                Label lblFooter = (Label)e.Item.FindControl("lblEmpty");
                lblFooter.Visible = true;
            }
        }

    I found the above code useful when you need to show something like "There are no records" if your data source has no records; the ListView, by contrast, has a built-in template for this. If you combine a ListView with a DataPager, you will be in heaven, as it handles the paging for you without extra code. (http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.datapager.aspx)

    Note: you have to bind the ListView in PreRender; it doesn't work properly in Page_Load.

    More: http://www.4guysfromrolla.com/articles/061009-1.aspx

    Read the article

  • Is there a way I can sort traffic by page type based on URL structure in Google Analytics or Google Webmaster Tools?

    - by Felix
    I have a local business directory site. I'm trying to segment my incoming traffic by page type, so that I can find out what percentage of traffic goes exclusively to zip code pages and what percentage goes to city/state-level pages. I basically want to filter by URL structure to find out what percentage of total traffic the zip code pages account for. Does Google Tag Manager help with this? Here are the two URL paths: http://www.example.com/ny/new-york/10011/ http://www.example.com/ny/new-york Thanks all!
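    A hedged sketch: Google Analytics advanced segments (and Content Grouping) accept regular expressions over the page path, so the two page types above can be split with patterns along these lines (the exact URL shapes are assumptions based on the two examples):

        # Zip code pages, e.g. /ny/new-york/10011/
        ^/[a-z]{2}/[^/]+/[0-9]{5}/?$

        # City/state pages, e.g. /ny/new-york
        ^/[a-z]{2}/[^/]+/?$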

    Read the article

  • Directory access control with Apache: do I need to use a specific .htaccess?

    - by Mirror51
    I have an Apache webserver, and in the Apache configuration I have:

        Alias /backups "/backups"
        <Directory "/backups">
            AllowOverride None
            Options Indexes
            Order allow,deny
            Allow from all
        </Directory>

    I can access files via http://127.0.0.1/backups. The problem is that everyone can access them. I have a web interface, e.g. http://localhost/adminm, that is protected with .htaccess and a password. Now, I don't want a separate .htaccess and .htpasswd for /backups, and I don't want a second password prompt when a user clicks on /backups in the web interface. Is there any way to use the same .htaccess and .htpasswd for the backups directory?
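    A hedged sketch: the auth directives can go straight into the same <Directory> block, pointing at the password file the admin interface already uses; as long as the AuthName realm matches the one used for the admin area, the browser resends its cached credentials and no second prompt appears (the file path and realm name are assumptions):

        Alias /backups "/backups"
        <Directory "/backups">
            AllowOverride None
            Options Indexes
            AuthType Basic
            AuthName "Admin Area"
            AuthUserFile /var/www/admin/.htpasswd
            Require valid-user
        </Directory>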

    Read the article

  • Stop Google Analytics from appending hostname?

    - by Nick Q.
    I've come across an Analytics profile that is appending the rest of a URL to the end of a page's path. For example, when looking at the page that exists at http://example.com/page, I would expect to see /page, but instead it shows me /page/http://example.com/. The profile has no filters applied to it, and until July it was reporting as expected (/page). In July the site in question switched hosts (and absolutely nothing else, so I'm not sure that's the problem). The analytics code on the site is the standard Google async code with a domain set. All other profiles for the site show /page as expected. Any ideas as to how I can get the profile to function as expected?

    Read the article

  • Using curl to download a file in parts over different interfaces, running the commands simultaneously from a script

    - by jsjain007
    I wanted to download a file in parts simultaneously with curl, using IP aliasing (virtual Ethernet ports). What I did was paste the commands into a file and run it, but, being in a script, the commands are executed one by one. Is there a way to run these commands simultaneously? Here are the commands:

        curl --interface eth0:0 --range 0,38010880 http://wdl.cache.ijinshan.com/wps/download/Linux/unstable/kingsoft-office_9.1.0.4244~a12p3_i386.deb -o kinsoft-office.part1
        curl --interface eth0:1 --range 38010880,- http://wdl.cache.ijinshan.com/wps/download/Linux/unstable/kingsoft-office_9.1.0.4244~a12p3_i386.deb -o kinsoft-office.part2
        cat kinsoft-office.part*>kinsoft-office

    Can anyone help me run the two curl commands simultaneously from the script, so as to increase download speed?
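    A hedged sketch: appending & runs each curl in the background, and wait blocks until both have finished before the parts are joined. The byte ranges below use curl's dash syntax (start-end), and the split point mirrors the question:

        #!/bin/bash
        URL=http://wdl.cache.ijinshan.com/wps/download/Linux/unstable/kingsoft-office_9.1.0.4244~a12p3_i386.deb

        # Download both halves in parallel, one per aliased interface
        curl --interface eth0:0 --range 0-38010879 "$URL" -o kinsoft-office.part1 &
        curl --interface eth0:1 --range 38010880- "$URL" -o kinsoft-office.part2 &

        # Wait for both background jobs, then stitch the parts together
        wait
        cat kinsoft-office.part* > kinsoft-office.deb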

    Read the article

  • My site disappeared from Google search, how long does it take to get back?

    - by Sweb Dizajn
    Due to damage by malicious code, Google wrote: Google Analytics web property: link has been removed from http://swebdizajn.com November 29, 2011 Your Webmaster Tools http://swebdizajn.com site is no longer linked to a Google Analytics web property. Possible reasons are: You are no longer the owner of the site in Google Analytics, and nobody else owns both the site and the property Another site owner removed the link. After that I restored from a backup and then accepted the Google message to tell them that all is well. How long will I have to wait for my site to return to the position it was in before?

    Read the article

  • Pidgin not present in 12.10 repositories, how do I get it?

    - by Ankit
    I want to install Pidgin on my clean 12.10 install. When I go to the Software Center and try to install the client, I get an error saying: Not found There isn't a software package called "pidgin" in your current software sources. Any ideas which repositories I need to import to get this done? ERROR: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/p/pidgin/pidgin-data_2.10.6-0ubuntu1_all.deb 404 Not Found [IP: 91.189.92.156 80] Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/p/pidgin/pidgin_2.10.6-0ubuntu1_amd64.deb 404 Not Found [IP: 91.189.92.156 80]
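    A hedged note: Pidgin ships in the standard Ubuntu archive (the failing URLs above point at pool/main), and 404s on individual .deb fetches usually mean the local package index is stale because the mirror has since replaced those package versions. A minimal first step before importing extra repositories:

        sudo apt-get update
        sudo apt-get install pidgin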

    Read the article

  • Footer not showing in website depending on which item is loaded [on hold]

    - by samyb8
    I designed a website which has an issue, and although I have checked the HTML tagging carefully, I cannot fix it. If you go to this item: http://www.tahara.es/store/headbands/11/Ivory-Turquoise-headband you will see the FOOTER display normally. However, if you go to this other item: http://www.tahara.es/store/headscarves/15/Grey-and-ivory-with-stoned-flower-Headscarf the footer does not show. Any clue what I am missing or adding? The footer DIV starts like this: <div id="footer">

    Read the article

  • Maintenance plans love story

    - by Maria Zakourdaev
    There are about 200 QA and DEV SQL Servers out there. There is a maintenance plan on many of them that performs a backup of all databases and removes the backup history files.

    First of all, I must admit that I'm no big fan of maintenance plans in particular or of SSIS packages in general. In this specific case, if I ever need to change anything in the way the backup is performed, such as the compression feature, or make some other change, I have to open each plan one by one. This is quite a pain. Therefore, I decided to replace the maintenance plans with a stored procedure that performs exactly the same thing. Having such a procedure allows me to open multiple server connections and just execute an ALTER PROCEDURE whenever I need to change anything in it. There is nothing like good ole T-SQL.

    The first challenge was to remove the unneeded maintenance plans. Of course, I didn't want to do it server by server. I found the procedure msdb.dbo.sp_maintplan_delete_plan, but it only takes the maintenance plan id; it has no other parameters, like the plan name, which would have been much more useful.

    Now I needed to find the table that holds all maintenance plans on the server. You would think it would be msdb.dbo.sysdbmaintplans but, unfortunately, regardless of the number of maintenance plans on the instance, it contains just one row.

    After a while I found another table, msdb.dbo.sysmaintplan_subplans. It contains the plan id I was looking for, in the plan_id column, as well as the agent's job id which executes the plan's package.

    That was all I needed, and the rest turned out to be quite easy. Here is a script that can be executed against hundreds of servers from a multi-server query window to drop the specific maintenance plans:

        DECLARE @PlanID uniqueidentifier

        SELECT @PlanID = plan_id
        FROM msdb.dbo.sysmaintplan_subplans
        WHERE name LIKE 'BackupPlan%'

        EXECUTE msdb.dbo.sp_maintplan_delete_plan @plan_id = @PlanID

    The second step was to create a procedure that performs all of the old maintenance plan tasks: create a folder for each database, back up all databases on the server, and clean up the old files. The script is below. Enjoy.

        ALTER PROCEDURE BackupAllDatabases
            @PrintMode BIT = 1
        AS
        BEGIN
            DECLARE @BackupLocation VARCHAR(500)
            DECLARE @PurgeAferDays INT
            DECLARE @PurgingDate VARCHAR(30)
            DECLARE @SQLCmd VARCHAR(MAX)
            DECLARE @FileName VARCHAR(100)

            SET @PurgeAferDays = -14
            SET @BackupLocation = '\\central_storage_servername\BACKUPS\' + @@servername

            SET @PurgingDate = CONVERT(VARCHAR(19), DATEADD(dd, @PurgeAferDays, GETDATE()), 126)

            SET @FileName = '?_full_'
                + REPLACE(CONVERT(VARCHAR(19), GETDATE(), 126), ':', '-')
                + '.bak';

            SET @SQLCmd = '
                IF ''?'' <> ''tempdb'' BEGIN
                    EXECUTE master.dbo.xp_create_subdir N''' + @BackupLocation + '\?\'' ;

                    BACKUP DATABASE ? TO DISK = N''' + @BackupLocation + '\?\' + @FileName + '''
                    WITH NOFORMAT, NOINIT, SKIP, REWIND, NOUNLOAD, COMPRESSION, STATS = 10 ;

                    EXECUTE master.dbo.xp_delete_file 0, N''' + @BackupLocation + '\?\'', N''bak'', N''' + @PurgingDate + ''', 1;
                END'

            IF @PrintMode = 1 BEGIN
                PRINT @SQLCmd
            END

            EXEC sp_MSforeachdb @SQLCmd
        END
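    A hedged usage note: as written, the procedure always runs the backups via sp_MSforeachdb; @PrintMode = 1 additionally prints the generated per-database command (the ? placeholder stands for each database name), which helps sanity-check the backup paths:

        EXEC dbo.BackupAllDatabases @PrintMode = 1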

    Read the article

  • Simple HTML5 Friendly Markup Sample

    - by Geertjan
    From a demo done by David Heffelfinger (who has a great Java EE 7 screencast series here) on HTML5 friendly markup.

    index.xhtml:

        <?xml version='1.0' encoding='UTF-8' ?>
        <!DOCTYPE html>
        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:jsf="http://xmlns.jcp.org/jsf">
            <title>Data Entry Page</title>
            <body>
                <form method="POST" jsf:id='form'>
                    <table>
                        <tr>
                            <td>Name:</td>
                            <td><input jsf:id='name' type="text" jsf:value="${person.name}" /></td>
                        </tr>
                        <tr>
                            <td>City</td>
                            <th><input jsf:id='city' type="text" jsf:value="${person.city}"/></th>
                        </tr>
                        <tr>
                            <td><input type="submit" value="Submit" jsf:action="confirmation" /></td>
                        </tr>
                    </table>
                </form>
            </body>
        </html>

    confirmation.xhtml:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE html>
        <html xmlns="http://www.w3.org/1999/xhtml">
            <head>
                <title>Data Confirmation Page</title>
            </head>
            <body>
                <h1>#{person.name}</h1> from <h2>#{person.city}</h2>
            </body>
        </html>

    Person.java:

        package org.demo;

        import javax.enterprise.inject.Model;

        @Model
        public class Person {

            String name;
            String city;

            public String getName() {
                return name;
            }

            public void setName(String name) {
                this.name = name;
            }

            public String getCity() {
                return city;
            }

            public void setCity(String city) {
                this.city = city;
            }
        }

    Read the article
