Search Results

Search found 5137 results on 206 pages for 'i like traffic lights'.


  • Edit 100MB+ file

    - by Majid Fouladpour
    I have captured some traffic with Wireshark and saved the result as a file. The file now has three sections: the request headers, the response headers, and the response body. The response body is to become an FLV file, but at the moment everything is saved as a single file, so I need a way to delete the first two sections. The problem is that the file is very big (over a thousand megabytes). I have tried to open it with gedit, but no matter how long I wait, gedit hangs and remains unresponsive until I kill it. What tool can I use to edit this big file easily?
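
    A small script that streams the data works where GUI editors fail, because the file never has to fit in memory. Below is a rough Python sketch, assuming the video begins at the standard FLV signature ("FLV" followed by version byte 0x01) somewhere in the first 16 MB; the file names are placeholders:

        # Copy everything from the FLV signature onward into a new file,
        # streaming so the 1 GB+ capture never has to fit in memory.
        import shutil

        with open("capture.raw", "rb") as src, open("video.flv", "wb") as dst:
            head = src.read(16 * 1024 * 1024)   # headers live near the front
            start = head.find(b"FLV\x01")       # standard FLV magic bytes
            if start == -1:
                raise SystemExit("FLV signature not found in the first 16 MB")
            dst.write(head[start:])             # keep the body from here on
            shutil.copyfileobj(src, dst)        # stream the remaining data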

    Read the article

  • jQuery background overlay/alert without .onclick event - PHP responder?

    - by Philip
    Hi guys, I have no experience with jQuery, or JavaScript for that matter. I am trying to implement the technique from "Lights Out - dimming/covering background content with jQuery" to respond to users with errors or messages in general. That method uses the onclick event, and that's not what I'm after; I have tried to replace .onclick with .load, but that doesn't seem to work. I'm after a quick fix, as I really don't have the time to learn jQuery or its event handlers. The goal is to catch any errors or messages and, once these are set, show the alert box without any further action such as .onclick. How my code would look:

        $forms = new forms();
        if (count($forms->showErrors) > 0) { // or == true
            foreach ($forms->showErrors as $error) {
                print('<p class="alert">' . htmlspecialchars($error, ENT_QUOTES) . '</p>');
            }
        }

    Read the article

  • How to fix Google 404 not found Crawl Errors?

    - by Freeme
    I was checking Google Webmaster Tools for my blog to see if there was any indication of why my blog traffic dropped by half in one day, and I saw 43 Not Found crawl errors and 5 Not Found errors in the sitemap. The 5 sitemap errors were links to categories; I guess I renamed the categories, which is why Google can't find those links. As for the other 43 Not Found errors, I see blog post titles that contain apostrophes and periods (e.g. McDonald's, O.N.E.) that weren't found by the Google crawler. Blog posts with /CachedYou appended at the end, and blog posts with /www.example.com attached at the end, weren't found by Google's crawlers either. My question is: how do I correct these Not Found errors? Thanks

    Read the article

  • How to include high resolution version of thumbnail

    - by neak
    I'm running a tube site that has a thumbnail for each post/video (running on Wordpress, 500k posts). When I first created the thumbnails they were fairly large, around 640px wide each, and because of that I was seeing a lot of traffic from Google Images. After streamlining the site and resizing all of the thumbnails down to 170px, rather than scaling them down in the browser, I'm worried that Google won't rank the images as highly as it would at a larger resolution. So is there a way to include the higher-res versions and serve them to be indexed instead of the smaller ones?

    Read the article

  • How to debug a server that crashes once in a few days?

    - by Nir
    One of my servers crashes every few days. It does low-traffic static web serving + low-traffic dynamic web serving (PHP, local MySQL with small data, APC, memcache) + some background jobs like XML file processing. The only clue I have is that a few hours before the server dies it starts swapping (see screenshot: http://awesomescreenshot.com/075xmd24), even though the server has a lot of free memory. Server details: Ubuntu 11.10 oneiric i386, scalarizr (0.7.185), python 2.7.2, chef 0.10.8, mysql 5.1.58, apache 2.2.20, php 5.3.6, memcached 1.4.7, Amazon EC2 (us-west-1). How can I find the reason for the server crashes? When it crashes it is no longer accessible from the outside world.
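
    Since the machine is unreachable once it dies, the usual trick is to persist evidence to disk while it is still healthy; tools like sysstat's sar or atop do this out of the box. A minimal do-it-yourself sketch to run from cron every minute (the log path is an assumption), recording memory, swap and the top memory consumers:

        #!/usr/bin/env python
        # Append memory, swap and the biggest processes to a log so the
        # state just before a crash survives on disk. Run from cron, e.g.:
        #   * * * * * /usr/local/bin/memwatch.py
        import subprocess
        import time

        LOG = "/var/log/memwatch.log"  # assumed location

        def snapshot():
            free = subprocess.check_output(["free", "-m"]).decode()
            procs = subprocess.check_output(
                ["ps", "axo", "pid,rss,comm", "--sort=-rss"]).decode()
            with open(LOG, "a") as log:
                log.write("==== %s ====\n" % time.strftime("%Y-%m-%d %H:%M:%S"))
                log.write(free)
                # header line plus the ten largest processes by resident size
                log.write("\n".join(procs.splitlines()[:11]) + "\n\n")

        if __name__ == "__main__":
            snapshot()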

    Read the article

  • Collision Detection fails with AI cars

    - by amit.r007
    I am making a car parking game in Flash and AS3, in which I drive my car alongside other AI traffic cars moving along a specified path using guidelines. I am using CDK (Collision Detection Kit) for collision detection. Collision detection works fine with a few of the AI cars, but doesn't seem to work as required for the others. When an AI car is moving on a straight-line path it works fine, but when the AI car turns 90 degrees, my car goes into the AI car (overlapping) and the collision is only detected once it hits the center of that AI car. I made a new path and used a new sprite for the AI car, but the problem persists.
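
    The symptom, overlap that only registers near the other car's centre once it has turned, is typical of hit tests that still use the unrotated, axis-aligned bounding box. For illustration only (in Python rather than AS3, and not CDK's actual API), a separating-axis test that respects each car's rotation looks like this:

        # Separating-axis test for two rotated rectangles: if the two sets
        # of corners overlap when projected onto every edge normal, the
        # rectangles collide; one non-overlapping axis means no collision.
        import math

        def corners(cx, cy, w, h, angle):
            """Corner points of a w-by-h rectangle centred at (cx, cy),
            rotated by `angle` radians."""
            c, s = math.cos(angle), math.sin(angle)
            return [(cx + dx * c - dy * s, cy + dx * s + dy * c)
                    for dx, dy in ((-w/2, -h/2), (w/2, -h/2),
                                   (w/2, h/2), (-w/2, h/2))]

        def overlap_on_axis(pts_a, pts_b, axis):
            ax, ay = axis
            proj_a = [x * ax + y * ay for x, y in pts_a]
            proj_b = [x * ax + y * ay for x, y in pts_b]
            return min(proj_a) <= max(proj_b) and min(proj_b) <= max(proj_a)

        def rects_collide(a, b):
            """a and b are (cx, cy, w, h, angle) tuples."""
            pts_a, pts_b = corners(*a), corners(*b)
            for pts in (pts_a, pts_b):
                for i in range(2):  # two distinct edge directions per rectangle
                    (x1, y1), (x2, y2) = pts[i], pts[i + 1]
                    axis = (-(y2 - y1), x2 - x1)  # normal to the edge
                    if not overlap_on_axis(pts_a, pts_b, axis):
                        return False  # separating axis found: no collision
            return True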

    Read the article

  • Possible automated Bing Ads fraud?

    - by Gary Joynes
    I run a website that generates life insurance leads. The site is very simple: (a) a form for capturing the user's details, life insurance requirements, etc.; (b) a quote comparison feature. We drive traffic to our site using conventional Google AdWords and Bing Ads campaigns. Since the 6th of January we have received 30-40 dodgy leads which have the following in common:
    - All created between 2 and 8 AM
    - Phone number always in the format "123 1234 1234"
    - Name, date of birth, policy details and address all seem valid and are unique across the leads
    - Email addresses from "disposable" email accounts, including dodgit.com, mailinator.com, trashymail.com, pookmail.com
    - Some leads come from the customer form, some via the quote comparison feature
    - All come from different IP addresses
    - We get the keyword information passed through from the URLs
    - All look to be coming from Bing Ads
    - All come from Internet Explorer v7 and v8
    The consistency of the data and the random IP addresses suggest an automated approach, but I'm not sure of the intent. We can handle identifying these leads within our database, but is there any way of stopping this at the ad level, i.e. before the click-through?
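
    Stopping this at the ad level is largely limited to the exclusion features the ad platform itself offers, but the database-side identification mentioned above can at least be made explicit. A minimal sketch that flags leads matching the described pattern (the field names are assumptions; the phone format and domain list come straight from the question):

        import re

        DISPOSABLE = {"dodgit.com", "mailinator.com",
                      "trashymail.com", "pookmail.com"}
        PHONE_PATTERN = re.compile(r"^\d{3} \d{4} \d{4}$")

        def looks_fraudulent(lead):
            """Flag a lead matching the suspicious pattern.
            `lead` is assumed to be a dict with 'phone' and 'email' keys."""
            domain = lead["email"].rsplit("@", 1)[-1].lower()
            return (bool(PHONE_PATTERN.match(lead["phone"]))
                    or domain in DISPOSABLE)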

    Read the article

  • How can I distribute a unique database already in production?

    - by JVerstry
    Let's assume a successful Spring web application running on a MySQL or PostgreSQL database. The traffic is becoming so high, and the amount of data so big, that a distributed database solution needs to be implemented to address the scalability issue. Let's also assume this application uses Hibernate and that the data access layer is cleanly separated with DAOs. Ideally, one should be able to add or remove databases easily, and a failback solution is welcome too. What would be the best strategy to scale this database? Is it possible to minimize sharding code (Shard) in the application?
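
    One way to keep sharding code to a minimum is to confine routing to a single component that maps an entity key to a data source, so the DAOs never mention shards. A minimal sketch in Python for illustration (the connection strings and hash scheme are assumptions; a consistent-hash ring, unlike the modulo shown here, would also make adding or removing databases cheap):

        import hashlib

        class ShardRouter:
            """Maps an entity key to one of several database DSNs.
            DAOs ask the router for a connection and stay shard-unaware."""

            def __init__(self, dsns):
                self.dsns = dsns  # e.g. ["mysql://db1/app", "mysql://db2/app"]

            def dsn_for(self, key):
                h = int(hashlib.md5(str(key).encode()).hexdigest(), 16)
                return self.dsns[h % len(self.dsns)]

        # Usage: router.dsn_for(user_id) picks the shard holding that user.
        router = ShardRouter(["mysql://db1/app", "mysql://db2/app"])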

    Read the article

  • How do you offload work from the database?

    - by TheLQ
    In the web development field at least, web servers are everywhere, but they are backed by very few database servers. As web servers get hit with requests they execute large queries on the database, putting the server under heavy load. While web servers are very easy to scale, database servers are much harder (at least from what I know), making them a precious resource. One way I've heard from some answers here to take load off the database is to offload the work to the web server. Easy, until you realize that you've now got a ton of internal traffic as a web server tries to run SELECT TOP 3000 (presumably to crunch the results on its own), which still slows things down. What are other ways to take load off the database?
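
    The other classic answer is caching: serve repeated reads from memory and only touch the database on a miss. A cache-aside sketch against memcached (the pymemcache client, the query, and the DB-API cursor `cur` are illustrative assumptions):

        from pymemcache.client.base import Client

        cache = Client(("127.0.0.1", 11211))

        def get_top_products(cur, limit=10):
            key = "top_products:%d" % limit
            cached = cache.get(key)
            if cached is not None:
                return cached.decode()           # cache hit: no database work
            cur.execute("SELECT name FROM products ORDER BY sales DESC LIMIT %s",
                        (limit,))
            result = ",".join(row[0] for row in cur.fetchall())
            cache.set(key, result, expire=300)   # keep for five minutes
            return result

    The trade-off is staleness: the expiry bounds how out of date an answer can be, so it suits leaderboards and listings far better than account balances.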

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that.

    Planning for HA in IaaS

    IaaS involves Virtual Machines, so in effect an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching.

    In almost every cloud vendor I've studied, to ensure your application is protected by any level of HA, you need to have at least two of the instances (VMs) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter, or a local location) to ensure you can continue to serve your clients.

    You'll also need to host the same VMs in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need multiple locations to handle this. This means that you'll have to figure out how to manage state between the geos. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VMs, you may not be able to easily implement an HA solution.

    Planning for HA in PaaS

    Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. You also need to deploy the application again in a separate geo, but the advantage here is that you can work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site: the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. Storage is inexpensive, and that second copy can be used not only for HA but for DR.

    Planning for HA in SaaS

    In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) you have far less control over the HA solution, although you still have the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case).

    All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project; if you try to tack it on later in the process, the business will push back and potentially not implement HA.

    References: http://www.bing.com/search?q=windows+azure+High+Availability (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)
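
    As a small concrete illustration of the detect-and-re-route step discussed under IaaS, here is a sketch of a health probe that falls back to a secondary geo when the primary stops answering. The endpoints are placeholders, and a real deployment would flip a DNS entry or a load balancer rather than return a URL:

        import urllib.request

        PRIMARY_HEALTH = "https://app-us.example.com/health"  # placeholder
        PRIMARY_SITE = "https://app-us.example.com/"
        SECONDARY_SITE = "https://app-eu.example.com/"

        def active_endpoint(timeout=5):
            """Return the site that traffic should be routed to right now."""
            try:
                with urllib.request.urlopen(PRIMARY_HEALTH,
                                            timeout=timeout) as resp:
                    if resp.status == 200:
                        return PRIMARY_SITE
            except OSError:
                pass  # unreachable, refused, or timed out
            return SECONDARY_SITE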

    Read the article

  • Tracking referrals between profiles on the same domain in Google Analytics

    - by doctororange
    I have a website at mydomain.com that uses Analytics. I have a blog that resides at mydomain.com/blog/, which also uses Analytics. They are on different profiles. The main site uses something like:

        _gaq.push(['_setAccount', 'UA-XXXXXXXX-6']);

    While the blog uses:

        _gaq.push(['_setAccount', 'UA-XXXXXXXX-7']);
        _gaq.push(['_setCookiePath', '/blog/']);

    My issue is that this seems not to track referrals from the blog through to the main site when, for instance, the logo which links to the main site is clicked. Ideally, I would like the clicks of this logo to report that the source was mydomain.com/blog/, but because they are on the same domain they seem to register as direct traffic. Have I missed a step in my configuration, or will I have to resort to linking to something like mydomain.com?ref=blog? Thank you.

    Read the article

  • URL Rewrite http to https EXCEPT files in a specific subfolder

    - by BrettRobi
    I am trying to force all traffic on my web site to use HTTPS, using the URL Rewrite 2.0 module added to IIS 7.5. I got that working, and now I need to exclude a couple of pages from using SSL. So I need a rule that rewrites every URL to HTTPS except those referencing this folder. I've been banging my head against the wall on this and am hoping someone can help. I tried creating a rule to match all URLs except those in a nossl subfolder, as in this example:

        <rule name="HTTP to HTTPS redirect" enabled="true" stopProcessing="true">
            <match url="(/nossl/.*)" negate="true" />
            <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                <add input="{HTTPS}" pattern="off" />
            </conditions>
            <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Found" />
        </rule>

    But this doesn't work. Can anyone help?

    Read the article

  • Confused about nova-network

    - by neo0
    I'm sorry, because this question isn't related to Ubuntu. I asked in the Openstack forum, but that forum is not very active, so I'm hoping someone with Openstack Nova experience can help me with my problem. I've read some explanations of nova-network and how to configure it, like this one from the wiki: http://wiki.openstack.org/UnderstandingFlatNetworking. I'm confused about one detail: if all traffic from the instances must go through the nova controller node, then why do we still need the public interface on the nova-compute node? Is it necessary? And what happens when a request comes from outside to an instance? For example, I have a controller node and a nova-compute node, and on the nova-compute node I run an instance with a Wordpress website. Then someone connects to the public IP of this instance. Does the request go directly from the router to the nova-compute node, or from the router to the controller node and then to the nova-compute node? Thank you!

    Read the article

  • How to structure my AdWords campaign for testing and different groups of keywords?

    - by Romain Dorange
    I am starting an AdWords campaign, and I will measure conversion rates using the AdWords conversion tracking pixel. A conversion might be an account creation or a concrete sale. As it will be a test campaign meant to yield insights on CTR, CR, etc. for the future, I am likely to try several configurations:
    - Two different ads with different landing URLs and messages: one with a focus on the product, the other containing a discount embedded in the URL.
    - Four different groups or themes of keywords.
    I guess I have to: build 4 ad groups based on the keywords; create the 2 ads with the different messages; assign the two ads to each ad group; and follow the campaign precisely in the Ads tab, where I can see the effectiveness of each ad per ad group (for a total of 8 lines of reporting). Also, what are the key performance indicators I can get from an AdWords campaign to measure global effectiveness?
    - Return on investment from concrete sales (tracking pixel with e-commerce tag on the confirmation page)
    - Return on investment from lead acquisition (tracking pixel on account creation)
    - Traffic increase from the campaign

    Read the article

  • Hosting advice for a write-heavy dynamic website

    - by Rahul Rawat
    I have built a website using PHP and MySQL and now I am looking for a hosting service. I am expecting about 1,000 users registering, and about 5-10k pageviews/day, within a week's time. So which host should I opt for? The site lets users submit content as blobs and upload around 10 pictures per user. I hope that traffic will increase, so can justhost's or bluehost's shared hosting serve that purpose, or should I go for a more dedicated option? Basically, the site is write-heavy, averages 2-3 MySQL queries per page, and is quite dynamic. Given these requirements, which web hosting would be optimal for me?
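
    For perspective, the question's own numbers imply a very modest average database load; a quick back-of-envelope check:

        # Average database load implied by the traffic estimates above.
        pageviews_per_day = 10_000   # upper end of the 5-10k estimate
        queries_per_page = 3         # upper end of the 2-3 estimate
        seconds_per_day = 86_400

        qps = pageviews_per_day * queries_per_page / seconds_per_day
        print(f"~{qps:.2f} queries/second on average")  # ~0.35 qps

    Even allowing a generous peak factor, a few queries per second is well within shared-hosting territory; the per-user picture uploads are more likely to strain disk and bandwidth quotas than the database.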

    Read the article

  • How to distribute a unique database already in production?

    - by JVerstry
    Let's assume a successful Spring web application running on a MySQL or PostgreSQL kind of database. The traffic is becoming so high, and the amount of data so big, that a distributed database solution needs to be implemented; it is a scalability issue. Let's assume this application uses Hibernate and the data access layer is cleanly separated with DAO objects. What would be the best strategy to scale this database? Does anyone have hands-on experience to share? Is it possible to minimize sharding code (Shard) in the application? Ideally, one should be able to add or remove databases easily, and a failback solution is welcome too. I am not looking for "you could go for sharding" or "you could go NoSQL" kinds of answers; I am looking for deeper answers from people with experience.

    Read the article

  • Do search engines rank internal redirects negatively?

    - by siverd
    A client is in the late stages (code complete) of a website redesign and unfortunately hasn't implemented 301 redirects to point high-traffic pages to the new URLs. As I understand it, our only option at this point is to create redirects within the CMS. Our CMS allows us to do this: www.mysite.com/category/current-page.html will redirect to www.mysite.com/new-category-name/new-page.html. The site now uses custom logic on our 404 page to check this list of redirects and, if one exists, forwards the user to new-page.html. I understand that using 301 redirects would be the correct way to maintain our page rank, but I think that would require a code change which isn't possible. Question: how will search engines respond to this? Will they wait until the redirect happens and allow us to keep our page rank (authority, trust, etc.), or will they see the 404 page and down-rank us? Worst case, will they make our new-page.html start from a rank of "0"? Thanks for your help.
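
    What a crawler cares about is the HTTP status code actually returned, not which page template produced it, so a "404 page" that finds a match in the redirect list can still answer with a real 301. A sketch of that idea (Flask and the redirect map are illustrative assumptions, not the client's CMS):

        from flask import Flask, redirect, request

        app = Flask(__name__)

        # Hypothetical redirect map maintained in the CMS.
        REDIRECTS = {
            "/category/current-page.html": "/new-category-name/new-page.html",
        }

        @app.errorhandler(404)
        def not_found(error):
            target = REDIRECTS.get(request.path)
            if target:
                # A real 301 is returned, so crawlers never see a 404 here.
                return redirect(target, code=301)
            return "Page not found", 404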

    Read the article

  • All About Search Engine Position Optimization

    Search engine optimization, or SEO, is a method of increasing the amount of traffic or hits to your website, which results in your website ranking high in search engine results. These results are produced whenever an individual types a keyword or a set of keywords into a search query on search engines like Yahoo!, Google and the like. Being high on the list of search results matters a lot because it makes you more visible to the general public, especially to your target market. This differentiates you from your competitors, who may rank low in the search results or may not appear in the results lists at all.

    Read the article

  • Unevenly illuminated images

    - by coul
    How do I get rid of uneven illumination in images that contain text data, usually printed but possibly handwritten? The images can have bright spots where light reflected when the picture was taken. I've seen the Halcon program's segment_characters function, which does this job perfectly, but it is not open source. I want to convert an image into one that has constant illumination in the background and darker regions of text, so that binarization will be easy and noise-free. The text is assumed to be darker than its background. Any ideas?
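
    One common open-source approach is to estimate the illumination field with a large morphological closing and divide it out before thresholding. A sketch with OpenCV in Python (the kernel size and file names are assumptions to tune per image):

        import cv2

        img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)

        # A closing with a kernel larger than any glyph erases the text
        # and keeps only the slowly varying illumination field.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (51, 51))
        background = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)

        # Dividing by the background flattens the lighting.
        flat = cv2.divide(img, background, scale=255)

        # Otsu's threshold now cleanly separates dark text from the
        # uniform background.
        _, binary = cv2.threshold(flat, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        cv2.imwrite("binary.png", binary)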

    Read the article

  • Why is Google Analytics displaying wrong landing pages?

    - by Salman
    I see all of my pages as landing pages in Google Analytics, which cannot be true, as I did not post those pages anywhere and I don't see any traffic hitting them directly. Also, I am using virtual page views on a few buttons, and I see those virtual pages as landing pages too. For example: /click/request-a-quote - 35000 views. 35000 is too big a number to be ignored. Even if I ignore the virtual page views, I see a lot of pages as landing pages that I am 100% sure visitors (at least not so many) are NOT hitting directly. Any advice on how to debug this? PS: I'm using the following code:

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXXXX-X']);
        _gaq.push(['_setDomainName', 'none']);
        _gaq.push(['setLocalGifPath', '/images/_utm.gif']);
        _gaq.push(['_setAllowLinker', true]);
        _gaq.push(['_trackPageview', 'account/phase1']);

    Read the article

  • Duplicate domain names - .net and .com: create separate pages, or redirect?

    - by guisasso
    From an SEO point of view: this website has a good amount of traffic for a local business, but also ships some merchandise. While the .net domain (registered first) is associated with the local business (Google Places, Maps, etc.), the .com domain only redirects to the .net domain. Would it be good, bad or okay to create a different page for the .com domain, for example, one that would be pretty simple but would link to the 10 different categories of products that this company sells? I know links are good, so there's that, but what else would be good, or bad? Thanks in advance!

    Read the article

  • How long should I keep 301 redirecting pages from a deprecated domain?

    - by ElHaix
    I had an old domain that I have deprecated, but I 301-redirected all requests from it to my new site. The new site is now receiving a decent amount of traffic, but I don't know how much of it is 301-redirected from the old site, and doing a site:[old site] search still shows several thousand pages indexed. Since all pages from the old site are 301-redirected, will they ever be removed from the index as long as the old domain name is active? As a rule of thumb, I've heard 90 days somewhere for any significant site change. When is it safe to burn the old domain?

    Read the article

  • Over-reporting in Google Analytics - social media stats

    - by colmcq
    I have a client, and their traffic from social media reads thus: 80% from Facebook, 1% from Twitter. This suggests they are not exploiting Twitter at all, and that was in my presentation, but my boss took it out, claiming Twitter stats are under-reported in Google Analytics. I can't substantiate this claim and wonder where she got this idea from. Can anyone shed light on this? Are my stats wrong, and should I disregard these figures? But 80-to-1 seems like one hell of an under-report! Thanks, c

    Read the article

  • Can using an apt proxy (d-i mirror/http/proxy string http://mymirror) affect the installation of a .deb?

    - by Randolph
    I have been doing Ubuntu deployment using a preseed.cfg. After becoming comfortable with the packages being installed, it was time to reduce download time and internet traffic by creating a mirror. I ended up doing a "partial mirror" using apt-cacher-ng and preseeding it by adding d-i mirror/http/proxy string http://mymirror to the preseed.cfg. This is where things got strange. I have a few .debs that I run as part of preseed/late_command by wgetting them and installing them with dpkg -i. The packages were installing without issue until I added the proxy; with the proxy, they fail to install. So does the proxy affect installing .debs during a preseeded installation?

    Read the article

  • GA tracking utm query params after hashbang

    - by hybrid9
    We currently use a hashbang for the portion of our site that generates dynamic content, which can also be deep-linked. Our analytics team wants to use utm parameters to track referral traffic from social networks. We are using Universal Analytics (analytics.js) as well as GTM. Will GA pick up the query parameters after the hashbang, or do they always have to go before it? For example:
    1. example.com/#!/some/content?utm_source=foo&utm_campaign=bar
    2. example.com?utm_source=foo&utm_campaign=bar/#!/some/content
    In #1, I'm concerned that the utm parameters won't be recorded, and in #2 the page will break or the URL could be incorrectly written. How does GA pull in those parameters - location.search? regex? Can I get away with using either?

    Read the article
