Search Results

Search found 6198 results on 248 pages for 'traffic filtering'.

Page 142/248

  • Tracking referrals between profiles on the same domain in Google Analytics

    - by doctororange
    I have a website at mydomain.com that uses Analytics. I have a blog that resides at mydomain.com/blog/, which also uses Analytics. They are on different profiles. The main site uses something like: _gaq.push(['_setAccount', 'UA-XXXXXXXX-6']); while the blog uses: _gaq.push(['_setAccount', 'UA-XXXXXXXX-7']); _gaq.push(['_setCookiePath', '/blog/']); My issue is that this does not seem to track referrals from the blog through to the main site when, for instance, the logo that links to the main site is clicked. Ideally, I would like clicks on this logo to report that the source was mydomain.com/blog/, but because both are on the same domain, they seem to register as direct traffic. Have I missed a step in my configuration, or will I have to resort to linking to something like mydomain.com?ref=blog? Thank you.
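
    A minimal sketch of the campaign-tagging workaround hinted at above, assuming classic ga.js on both profiles; the utm values are illustrative, not taken from the question:

      <!-- On the blog, tag the logo link back to the main site -->
      <a href="http://mydomain.com/?utm_source=blog&utm_medium=internal&utm_campaign=logo">
        <img src="/logo.png" alt="Home" />
      </a>

    With a tagged link like this, the main site's profile attributes the visit to the "blog" source instead of lumping it into direct traffic; the trade-off is that campaign tags replace the original traffic source for that session.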

    Read the article

  • How do you offload work from the database?

    - by TheLQ
    In the web development field at least, web servers are everywhere, but they are backed by very few database servers. As web servers get hit with requests, they execute large queries on the database, putting it under heavy load. While web servers are very easy to scale, database servers are much harder to scale (at least from what I know), making them a precious resource. One way I've heard suggested in some answers here is to take load off the database by offloading the work to the web server. Easy, until you realize that now you've got a ton of internal traffic as a web server tries to run SELECT TOP 3000 (presumably to crunch the results on its own), which still slows things down. What are other ways to take load off the database?
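
    One common way to shed read load is a cache-aside layer in front of the database, so repeated queries never reach it. A minimal sketch in Python, assuming a Redis instance and the redis-py client; the key name and query are hypothetical stand-ins:

      import json

      import redis  # assumption: redis-py installed and Redis running locally

      cache = redis.Redis(host="localhost", port=6379)

      def get_top_products(db_conn, limit=3000, ttl=300):
          """Cache-aside: serve from Redis when possible, hit the DB otherwise."""
          key = "top_products:%d" % limit      # hypothetical cache key
          cached = cache.get(key)
          if cached is not None:
              return json.loads(cached)        # cache hit: the database is never touched
          # Cache miss: run the expensive query once, then remember the result.
          rows = db_conn.execute("SELECT TOP 3000 * FROM products").fetchall()
          cache.set(key, json.dumps([list(r) for r in rows]), ex=ttl)
          return rows

    The trade-off is staleness: with a 300-second TTL, the web tier absorbs repeated reads and the database sees at most one such query per TTL window.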

    Read the article

  • A few tips on deploying Secure Enterprise Search with PeopleSoft

    - by Matthew Haavisto
    Oracle's Secure Enterprise Search is part of PeopleSoft now. It is provided as part of the PeopleTools platform as an appliance, and is used with applications starting with release 9.2. Secure Enterprise Search is a rich and powerful search product that can enhance search and navigation in PeopleSoft applications. It also provides useful features, like facets and filtering, that are common in consumer search engines. Several questions have arisen about the deployment of SES, how to administer it, and how to ensure optimum performance. People have also asked which versions are supported on which platforms. To address the most common of these questions, we are posting this list of tips.

    Platform support: SES 11.1.2.2 does not support some of the platforms supported by PeopleTools, such as Windows 2012 and AIX 7.1. However, PeopleSoft and SES can use different operating system platforms when SES is deployed on a separate machine. SES 11.2.2.2 will have the required platform support for PT 8.53 in the future. We are planning to certify PT 8.53 once the testing is complete in 8.54 development and all platform support is released for 11.2.2.2.

    Architecture: We recommend running SES on a separate machine (from your apps) for two reasons. 1. SES bundles specific WebLogic, Java, and Oracle DB versions and, at a minimum, might need different OS patches than PeopleSoft. By having SES run on a different machine, these prerequisites can be managed through their lifecycles independently for PeopleSoft and SES. 2. SES is resource-intensive: it runs its own WebLogic and Oracle database. By having SES run on its own machine, sufficient resources can be allocated to SES, freeing the PeopleSoft servers from the impact of SES load patterns.

    We will be providing a comprehensive red paper covering PeopleSoft/SES administration in the near future, but until that is published, we'll post tips on this blog.

    Read the article

  • Ubuntu Server 11.04 recognizes only 1 core instead of 4

    - by Kreker
    I searched other questions and googled a lot, but I can't find a solution to this problem. Ubuntu Server 11.04 64-bit is installed on a Dell PowerEdge with an Intel Xeon X5450. It recognizes only 1 of the 4 cores I have. I tried modifying the GRUB config, but it didn't work. In the machine's BIOS I didn't find anything useful.

    CPU:
    root@darwin:~# cat /proc/cpuinfo
    processor       : 0
    vendor_id       : GenuineIntel
    cpu family      : 6
    model           : 23
    model name      : Intel(R) Xeon(R) CPU X5450 @ 3.00GHz
    stepping        : 10
    cpu MHz         : 2992.180
    cache size      : 6144 KB
    physical id     : 0
    siblings        : 1
    core id         : 0
    cpu cores       : 1
    apicid          : 0
    initial apicid  : 0
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 13
    wp              : yes
    flags           : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 xsave lahf_lm dts tpr_shadow vnmi flexpriority
    bogomips        : 5984.36
    clflush size    : 64
    cache_alignment : 64
    address sizes   : 38 bits physical, 48 bits virtual
    power management:

    GRUB:
    root@darwin:~# cat /etc/default/grub
    # If you change this file, run 'update-grub' afterwards to update
    # /boot/grub/grub.cfg.
    # For full documentation of the options in this file, see:
    #   info -f grub -n 'Simple configuration'
    GRUB_DEFAULT=0
    #GRUB_HIDDEN_TIMEOUT=0
    GRUB_HIDDEN_TIMEOUT_QUIET=true
    GRUB_TIMEOUT=2
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    GRUB_CMDLINE_LINUX_DEFAULT=""
    GRUB_CMDLINE_LINUX="noapic nolapic" # was acpi=off
    # Uncomment to enable BadRAM filtering, modify to suit your needs
    # This works with Linux (no patch required) and with any kernel that obtains
    # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
    #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
    # Uncomment to disable graphical terminal (grub-pc only)
    #GRUB_TERMINAL=console
    # The resolution used on graphical terminal
    # note that you can use only modes which your graphic card supports via VBE
    # you can see them in real GRUB with the command `vbeinfo'
    #GRUB_GFXMODE=640x480
    # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
    #GRUB_DISABLE_LINUX_UUID=true
    # Uncomment to disable generation of recovery mode menu entries
    #GRUB_DISABLE_RECOVERY="true"
    # Uncomment to get a beep at grub start
    #GRUB_INIT_TUNE="480 440 1"

    Complete dmesg: too long, posted on pastebin: http://pastebin.com/bsKPBhzu
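
    One thing worth testing, given that config: the nolapic flag tells the kernel not to use the local APIC, which typically limits it to a single CPU, and noapic can interfere with SMP as well. A sketch of reverting the flags, assuming nothing else still depends on them:

      # Remove the SMP-hostile flags (leaves GRUB_CMDLINE_LINUX empty):
      sudo sed -i 's/GRUB_CMDLINE_LINUX="noapic nolapic"/GRUB_CMDLINE_LINUX=""/' /etc/default/grub
      sudo update-grub   # regenerate /boot/grub/grub.cfg
      sudo reboot
      # After the reboot, verify the visible core count:
      grep -c ^processor /proc/cpuinfo

    If the flags were added to work around an ACPI problem, that problem may resurface and need a narrower workaround than disabling the APICs outright.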

    Read the article

  • URL Rewrite http to https EXCEPT files in a specific subfolder

    - by BrettRobi
    I am trying to force all traffic on my web site to use HTTPS, using the URL Rewrite 2.0 module added to IIS 7.5. I got that working, and now I need to exclude a couple of pages from using SSL. So I need a rule that redirects every URL to HTTPS except those referencing this folder. I've been banging my head against the wall on this and am hoping someone can help. I tried creating a rule to match all URLs except those in a nossl subfolder, as in this example:

    <rule name="HTTP to HTTPS redirect" enabled="true" stopProcessing="true">
      <match url="(/nossl/.*)" negate="true" />
      <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
        <add input="{HTTPS}" pattern="off" />
      </conditions>
      <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Found" />
    </rule>

    But this doesn't work. Can anyone help?
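
    A likely culprit, judging by how the module evaluates patterns: match url is tested against the request path without a leading slash, so the pattern (/nossl/.*) never matches, and with negate="true" the rule then fires for every request, including those under /nossl/. Also, a negated match provides no capture group, so {R:1} is empty. A sketch of a corrected rule under those assumptions (untested here; {URL} carries the original path, leading slash included):

      <rule name="HTTP to HTTPS redirect" enabled="true" stopProcessing="true">
        <!-- Match everything EXCEPT paths starting with nossl/ (note: no leading slash) -->
        <match url="^nossl/" negate="true" />
        <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
          <add input="{HTTPS}" pattern="off" />
        </conditions>
        <!-- {URL} preserves the full requested path, so no capture group is needed -->
        <action type="Redirect" url="https://{HTTP_HOST}{URL}" redirectType="Found" />
      </rule>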

    Read the article

  • All About Search Engine Position Optimization

    Search engine optimization, or SEO, is a method of increasing the amount of traffic, or hits, to your website by making your website rank high in search engine results. These results are produced whenever an individual types a keyword or a set of keywords into a search query on search engines like Yahoo!, Google, and the like. Being high on the list of search results matters a great deal because it makes you more visible to the general public, especially to your target market. This differentiates you from your competitors, who may rank low in the search results or may not even appear in the results lists at all.

    Read the article

  • How to structure my AdWords campaign for testing and different groups of keywords?

    - by Romain Dorange
    I am starting an AdWords campaign and will measure conversion rates using the AdWords conversion tracking pixel. A conversion might be an account creation or a concrete sale. As this will be a test campaign to gain insights on CTR, CR, etc. for the future, I am likely to try several configurations: two different ads with different landing URLs and messages (one focused on the product, the other containing a discount embedded in the URL), and 4 different groups or themes of keywords. I guess I have to build 4 ad groups based on the keywords, create the 2 ads with the different messages, assign the two ads to each ad group, and follow the campaign closely in the Ads tab, where I can see the effectiveness of each ad per ad group (for a total of 8 lines of reporting). Also, what are the key performance indicators I can get from an AdWords campaign to measure overall effectiveness? Measuring return on investment from concrete sales (tracking pixel with an e-commerce tag on the confirmation page), measuring return on investment from lead acquisition (tracking pixel on account creation), and measuring the traffic increase from the campaign.

    Read the article

  • Confused about nova-network

    - by neo0
    I'm sorry because this question isn't related to Ubuntu. I asked in the OpenStack forum, but that forum is not very active, so I hope someone with OpenStack Nova experience can help me with my problem. I've read some explanations about nova-network and how to configure it, like this one from the wiki: http://wiki.openstack.org/UnderstandingFlatNetworking I'm confused about a detail: if all traffic from the instances must go through the Nova controller node, why do we still need the public interface on the nova-compute node? Is it necessary? What happens when a request comes from outside to an instance? For example, I have a controller node and a nova-compute node. On the nova-compute node I run an instance with a WordPress website. Then someone connects to the public IP of this instance. Does the request go directly from the router to the nova-compute node, or from the router to the controller node and then to the nova-compute node? Thank you!

    Read the article

  • Do search engines rank internal redirects negatively?

    - by siverd
    A client is in the late stages (code complete) of a website redesign and unfortunately hasn't implemented 301 redirects to point high-traffic pages to the new URLs. As I understand it, our only option at this point is to create redirects within the CMS. Our CMS allows us to do this: www.mysite.com/category/current-page.html will redirect to www.mysite.com/new-category-name/new-page.html. The site now uses custom logic on our 404 page to check this list of redirects and, if one exists, forwards the user to new-page.html. I understand that using 301 redirects would be the correct way to maintain our page rank, but I think that would require a code change, which isn't possible. Question: how will search engines respond to this? Will they wait until the redirect happens and allow us to keep our page rank (authority, trust, etc.), or will they see the 404 page and down-rank us? Worst case: will they make our new-page.html start from a rank of "0"? Thanks for your help.
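
    If the CMS's 404 hook can set response headers, the redirect can still be served as a true 301 even though it is wired through the 404 page. A hypothetical PHP sketch (the array, paths, and host are illustrative; the real lookup would use the CMS's redirect list):

      <?php
      // Hypothetical 404 handler: consult the redirect map before giving up.
      $redirects = array(
          '/category/current-page.html' => '/new-category-name/new-page.html',
      );
      $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
      if (isset($redirects[$path])) {
          // A real 301 tells search engines to transfer the old URL's signals.
          header('HTTP/1.1 301 Moved Permanently');
          header('Location: http://www.mysite.com' . $redirects[$path]);
          exit;
      }
      // No mapping found: fall through to a normal 404 response.
      header('HTTP/1.1 404 Not Found');

    The status code is the crucial part: a 404 followed by a client-side forward looks like a dead page to crawlers, while a 301 asks them to reassign the old URL's authority to the new one.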

    Read the article

  • How to distribute a unique database already in production?

    - by JVerstry
    Let's assume a successful web Spring application running on a MySQL or PostgreSQL kind of database. The traffic is becoming so high, and the amount of data so big, that a distributed database solution needs to be implemented. It is a scalability issue. Let's assume this application uses Hibernate and the data access layer is cleanly separated with DAO objects. What would be the best strategy to scale this database? Does anyone have hands-on experience to share? Is it possible to minimize the sharding code (e.g., Hibernate Shards) in the application? Ideally, one should be able to add or remove databases easily. A failover solution is welcome too. I am not looking for "you could go for sharding" or "you could go NoSQL" kinds of answers; I am looking for deeper answers from people with experience.
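
    For illustration only, a minimal Java sketch of the kind of routing layer that keeps sharding logic out of the DAOs: one class resolves a DataSource from an entity key, so the DAOs never know how many physical databases exist. All names are hypothetical rather than from any established framework:

      import java.util.List;
      import java.util.concurrent.CopyOnWriteArrayList;
      import javax.sql.DataSource;

      // Hypothetical shard router: DAOs ask for a DataSource by entity key.
      public class ShardRouter {
          private final List<DataSource> shards = new CopyOnWriteArrayList<>();

          public ShardRouter(List<DataSource> initialShards) {
              shards.addAll(initialShards);
          }

          /** Deterministic choice of shard for a given entity key. */
          public DataSource shardFor(long entityId) {
              int index = (int) Math.floorMod(entityId, (long) shards.size());
              return shards.get(index);
          }

          /** Adding a shard is one call here, but note the catch: plain modulo
           *  placement reshuffles existing keys when the shard count changes,
           *  which is why consistent hashing or a directory-based lookup is
           *  usually preferred. */
          public void addShard(DataSource ds) {
              shards.add(ds);
          }
      }

    The comment on addShard is the real design problem behind "add or remove databases easily": the routing function, not the DAO layer, determines how painful rebalancing will be.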

    Read the article

  • Hosting advice for a write-heavy dynamic website

    - by Rahul Rawat
    I have built a website using PHP and MySQL, and now I am looking for a hosting service. I am expecting about 1,000 users registering and about 5-10k pageviews/day within a week's time. So which host should I opt for? The site will let users submit content of type blobs and around 10 pictures per user. I hope that traffic will increase, so could JustHost's or BlueHost's shared hosting serve that purpose, or should I go for something more dedicated? Basically, the site is write-heavy, with an average of 2-3 MySQL queries per page, and it is quite dynamic. Given these requirements, which web hosting would be optimal for me?

    Read the article

  • Why is Google Analytics displaying wrong landing pages?

    - by Salman
    I see all of my pages as Landing Pages in Google Analytics, which cannot be true, as I did not post those pages anywhere and I don't see any traffic hitting those pages directly. Also, I am using virtual pageviews on a few buttons, and I see those virtual pages as Landing Pages too. For example, /click/request-a-quote shows 35000 views. 35,000 is too big a number to be ignored. Even if I ignore the virtual pageviews, I see a lot of pages as Landing Pages that I am 100% sure visitors (at least not so many users) are NOT hitting directly. Any advice on how to debug this? PS: I'm using the following code:

    var _gaq = _gaq || [];
    _gaq.push(['_setAccount', 'UA-XXXXXXXX-X']);
    _gaq.push(['_setDomainName', 'none']);
    _gaq.push(['_setLocalGifPath', '/images/_utm.gif']);
    _gaq.push(['_setAllowLinker', true]);
    _gaq.push(['_trackPageview','account/phase1']);

    Read the article

  • LiveMeeting VC PowerShell PASS – Troubleshooting SQL Server with PowerShell

    - by Laerte Junior
    Guys, join me on Wednesday, July 18th, at 12 noon EDT (GMT-4) for a presentation called Troubleshooting SQL Server with PowerShell. It will be in English, so please make allowances for this. I'm sure you're aware that my English is not perfect, but it is not so bad; I will do my best, you can be sure. The registration link will be available soon from PowerShell.sqlpass.org, so I hope to see you there. It will be a session without slides: just code, pure PowerShell code. Trust me, we will see a lot of COOL stuff. Big thanks to Aaron Nelson (@sqlvariant) for the opportunity! Here are some more details about the presentation, "Troubleshooting SQL Server with PowerShell - The Next Level": It is normal for us to face poorly performing queries or even complete failures in our SQL Server environments. This can happen for a variety of reasons, including poor database design, hardware failure, improperly configured systems, and OS updates applied without testing. As database administrators, we need to take precautions to minimize the impact of these problems when they occur, so we need the tools and methodology required to identify and solve issues quickly. In this session we will use PowerShell to explore some common troubleshooting techniques used in our day-to-day work as DBAs. This includes gathering performance counters on several servers at the same time using background jobs, identifying blocked sessions, and reading and filtering the SQL error log even if the instance is offline. The approach uses some advanced PowerShell techniques that allow us to scale the code to multiple servers and run the data collection asynchronously.
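
    As a taste of the topics listed, here is a minimal sketch of my own (not material from the session) for gathering a performance counter from several servers at once with background jobs; the server names are placeholders:

      # Collect CPU usage from multiple servers in parallel using background jobs.
      $servers = 'SQLBOX1', 'SQLBOX2', 'SQLBOX3'   # hypothetical server names

      $jobs = foreach ($server in $servers) {
          Start-Job -Name "perf_$server" -ScriptBlock {
              param($s)
              Get-Counter -ComputerName $s `
                          -Counter '\Processor(_Total)\% Processor Time' `
                          -SampleInterval 2 -MaxSamples 5
          } -ArgumentList $server
      }

      # Wait for every job, then gather and inspect the samples.
      $results = $jobs | Wait-Job | Receive-Job
      $results.CounterSamples | Select-Object Path, CookedValue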

    Read the article

  • How long should I keep 301 redirecting pages from a deprecated domain?

    - by ElHaix
    I have an old domain that I have deprecated, with all requests from it 301 redirected to my new site. The new site is now receiving a decent amount of traffic, but I don't know how much of it comes via the 301 redirects from the old site, and doing a site:[old site] search still shows several thousand pages indexed. Since all pages from the old site are 301 redirected, will they ever be removed from the index as long as the old domain name is active? As a rule of thumb, somewhere I picked up 90 days for any significant site changes. When is it safe to burn the old domain?

    Read the article

  • Duplicate domain names - .net and .com: create separate pages, or redirect?

    - by guisasso
    From an SEO point of view: this website has a good amount of traffic for a local business, but it also ships some merchandise. While the .net domain (registered first) is associated with the local business (Google Places, Maps, etc.), the .com domain only redirects to the .net domain. Is it good, bad, or okay to create a different page for the .com domain, for example, one that would be pretty simple but would link to the 10 different categories of products this company sells? I know links are good, so there's that, but what else is good, or bad? Thanks in advance!

    Read the article

  • Are long methods always bad?

    - by wobbily_col
    Looking around earlier, I noticed some comments about long methods being bad practice. I am not sure I always agree that long methods are bad (and would like opinions from others). For example, I have some Django views that do a bit of processing of the objects before sending them to the view, a long method being 350 lines of code. I have my code written so that it deals with the parameters, sorting/filtering the queryset, then bit by bit does some processing on the objects my query has returned. The processing is mainly conditional aggregation, with rules complex enough that it can't easily be done in the database, so I have some variables declared outside the main loop that get altered during the loop:

    variable_1 = 0
    variable_2 = 0
    for obj in queryset:
        if obj.condition_a and variable_2 > 0:
            variable_1 += 1
        # ... more conditions that alter the variables ...
    return queryset, context

    So according to the theory, I should factor all the code out into smaller methods, so that the view method is at most one page long. However, having worked on various code bases in the past, I sometimes find that this makes the code less readable: you need to constantly jump from one method to the next to figure out all the parts, while keeping the outermost method in your head. I find that with a long, well-formatted method you can see the logic more easily, because it isn't hidden away in inner methods. I could factor the code out into smaller methods, but often an inner loop is being used for two or three things, so that would result in more complex code, or in methods that don't do one thing but two or three (alternatively I could repeat the inner loops for each task, but then there will be a performance hit). So is there a case that long methods are not always bad? Is there always a case for writing methods that will only be used in one place?
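
    For comparison, a minimal sketch of the extract-method alternative being debated, with invented condition names; the open question is whether this genuinely reads better than the inline loop:

      def aggregate_stats(queryset):
          """One pass over the queryset; each helper owns one aggregation rule."""
          stats = {"variable_1": 0, "variable_2": 0}
          for obj in queryset:
              update_variable_2(obj, stats)
              update_variable_1(obj, stats)
          return stats

      def update_variable_1(obj, stats):
          # Rule lifted from the question: count when condition_a holds and
          # variable_2 has already accumulated something.
          if obj.condition_a and stats["variable_2"] > 0:
              stats["variable_1"] += 1

      def update_variable_2(obj, stats):
          if obj.condition_b:  # invented condition, for illustration only
              stats["variable_2"] += 1

    This keeps the single loop (so no performance hit from duplicating it), at the cost of exactly the indirection complained about above: each rule now lives one jump away from the loop that drives it.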

    Read the article

  • How to store prices that have effective dates?

    - by lal00
    I have a list of products. Each of them is offered by N providers. Each provider quotes us a price for a specific date. That price is effective until that provider decides to set a new price, in which case the provider will give the new price with a new date. The MySQL table header currently looks like: provider_id, product_id, price, date_price_effective. Every other day, we compile a list of products/prices that are effective for the current day. For each product, the list contains a sorted list of the providers that offer that particular product, so that we can order certain products from whoever happens to offer the best price. To get the effective prices, I have a SQL statement that returns all rows that have date_price_effective >= NOW(). That result set is processed with a Ruby script that does the sorting and filtering necessary to obtain a file that looks like this:

    product_id_1,provider_1,provider_3,provider_8,provider_10...
    product_id_2,provider_3,provider_2,provider_1,provider_10...

    This works fine for our purposes, but I still have an itch telling me that a SQL table is probably not the best way to store this kind of information; I have the feeling that this kind of problem has been solved before in other, more creative ways. Is there a better way to store this information than SQL? Or, if using SQL, is there a better approach than the one I'm using?
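
    For what it's worth, the "latest effective row per group" shape is expressible directly in SQL, which may remove some of the Ruby post-processing; a sketch assuming the table is named prices (hypothetical) and that a quote is effective from its date forward:

      -- Current effective price per (provider, product): the newest quote
      -- whose effective date is not in the future.
      SELECT p.provider_id, p.product_id, p.price
      FROM prices p
      JOIN (
          SELECT provider_id, product_id,
                 MAX(date_price_effective) AS max_date
          FROM prices
          WHERE date_price_effective <= NOW()
          GROUP BY provider_id, product_id
      ) latest
        ON  latest.provider_id = p.provider_id
        AND latest.product_id  = p.product_id
        AND latest.max_date    = p.date_price_effective
      ORDER BY p.product_id, p.price;  -- cheapest provider first within each product

    Note the <= NOW() here: since a price stays effective until replaced, the effective quote is the most recent one dated on or before today, and rows dated in the future are exactly the ones not yet in force.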

    Read the article

  • Laptop not syncing with Ubuntu One

    - by Lolwhites
    I have a desktop running Maverick and a laptop running Lucid. Both are supposed to be linked to my Ubuntu One account. The desktop syncs fine, but when I started the laptop for the first time in a couple of months, it wouldn't sync any more. The Ubuntu One Preferences window either reports "Synchronisation complete", even though no recent files have been downloaded and no test files created in the relevant synced folder have been uploaded, or it says "Synchronisation in progress", which does not appear to be happening, as it stays like that for ages and the lights on my router suggest no traffic is going through. I have repeatedly tried disconnecting and reconnecting, and removing the device from the account and then reattaching it, all to no avail.

    Read the article

  • How can I disable recent documents in Unity?

    - by detly
    How do I disable the tracking and display of recently opened files (and whatever else is remembered) in a default installation of Ubuntu 11.10? (Note that this is not a duplicate of How can I keep recent files from appearing in Unity?, since that question and its answers are concerned with temporary and specific filtering. I want to disable it completely for a single user account.) Okay, to deflect the inevitable and expand on my motivation... While trawling the usual forums and Google results for a solution, it (unsurprisingly) seems that the near-universal use cases for this request are either browsing porn or Warhammer research. And the obvious solution to this is to create another user account to contain all evidence. However, this is not why I'm asking, and I don't say that to get all high and mighty about it, it's because this answer won't help. (Even though I really don't have any interest in Warhammer, and I have no idea how that paint pot and brush ended up in my drawer, no that's not glue on my thumb, etc.) My actual use case is that I use my personal laptop for presentations in different circles of my life. I have a user account set up with all the settings I like for presentations (shortcuts, small launcher, default associations, etc). But I don't want an accidental keystroke (or the find dialog) to display other recent presentations I've given, or the files I used in composing the presentation, or whatever. I also don't want to have to recreate this profile for every single presentation I might give. I just want a nice little isolated, memoryless, clean corner of my notebook for public display.

    Read the article

  • Can using an apt proxy (d-i mirror/http/proxy string http://mymirror) affect the installation of a .deb?

    - by Randolph
    I have been doing Ubuntu deployment using a preseed.cfg. After becoming comfortable with the packages being installed, it was time to reduce download time and internet traffic by creating a mirror. I ended up doing a "partial mirror" using apt-cacher-ng, preseeding it by adding d-i mirror/http/proxy string http://mymirror to the preseed.cfg. This is where things got strange. I have a few .debs that I run as part of preseed/late_command by wgetting them and installing them with dpkg -i. The packages installed without issue until I added the proxy; with the proxy, they fail to install. So does the proxy affect installing .debs during a preseeded installation?

    Read the article

  • Over-reporting in Google Analytics - social media stats

    - by colmcq
    I have a client whose traffic from social media reads thus: 80% from Facebook, 1% from Twitter. This suggests they are not exploiting Twitter at all, and that was in my presentation, but my boss took it out, claiming Twitter stats are under-reported in Google Analytics. I can't substantiate this claim and wonder where she got the idea. Can anyone shed light on this? Are my stats wrong, and should I disregard these figures? But 80 to 1 seems like one hell of an under-report! Thanks, c

    Read the article

  • Generating AdSense ad impressions?

    - by Danny
    I own two websites. Let's say, as an example, www.first.com and www.second.com. I have two apps in the Google Chrome store whose app URLs are: app1 URL = www.first.com/currency_converter.php and app2 URL = www.first.com/currency_converter.php, respectively. Currently, if a user launches/clicks the first app URL, I forward them to my first website's homepage (www.first.com), where I have AdSense code along with the currency converter app. In this way I am generating Google ad impressions and traffic for my first website (i.e., www.first.com). So is this a valid approach? Am I violating any Google AdSense rule? Please provide your reply or suggestions. P.S.: Please comment if my question is not clear; I will try to write it in some other way.

    Read the article

  • Question about SEO and Domains

    - by jasondavis
    This is my first post here, as I am mainly on Stack Overflow and Server Fault. I have been programming for at least 10 years now and have made hundreds of websites, but I have only recently started getting into design and the SEO side of sites; it's sad that I overlooked these for so many years. I have picked up pretty good SEO knowledge over the years, but I have never really looked into it properly until now. My question: I would like to build a site that targets many different keywords in the search engines. For example, let's say I built a site about outdoor activities called outdoorreview.com and planned on having many sections: hunting, fishing, hiking, camping, cycling, climbing, etc. For the best results, how could I get the most search engine traffic to all these areas? Also, how should I structure the way to reach them: outdoorreview.com/Hiking/ or hiking.outdoorreview.com?

    Read the article

  • GA tracking utm query params after hashbang

    - by hybrid9
    We currently use a hashbang for the portion of our site that generates dynamic content, which can also be deep-linked. Our analytics team wants to use utm params to track referral traffic from social networks. We are using Universal Analytics (analytics.js) as well as GTM. Will GA pick up the query parameters after the hashbang, or do they always have to go before it? For example:

    1. example.com/#!/some/content?utm_source=foo&utm_campaign=bar
    2. example.com?utm_source=foo&utm_campaign=bar/#!/some/content

    In #1, I'm concerned that the utm params won't be recorded, and in #2 the page will break or the URL could be written incorrectly. How does GA pull in those parameters: location.search? A regex? Can I get away with using either?
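
    By default, analytics.js reads campaign parameters from the page URL's query string; anything after the # is fragment and is not parsed for utm values, so option #1 would likely lose the attribution. One workaround sketch, assuming Universal Analytics with a tracker already created: copy the fragment's query over to the tracker's location field before sending the pageview.

      // Sketch: surface utm params that sit after the hashbang.
      var hash = window.location.hash;  // "#!/some/content?utm_source=foo&utm_campaign=bar"
      var qs = hash.indexOf('?');
      if (qs !== -1) {
        // Rebuild a URL whose query string carries the campaign params.
        ga('set', 'location',
           window.location.protocol + '//' + window.location.host +
           window.location.pathname + '?' + hash.slice(qs + 1));
      }
      ga('send', 'pageview');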

    Read the article

  • No Obvious Answer - Query Strings and JavaScript

    - by nchaud
    Say I have this main page, /my-site/all-my-bath-soaps, which lists all my products. It has a search-filter text box that uses JavaScript to filter the products shown on the page (the URL doesn't change as they filter). Now, from many other parts of the site, I want to navigate to this products page and see specific products. E.g., <a href="/my-site/all-my-bath-soaps?filter='Nivea-Soap'"> will go to /all-my-bath-soaps and apply JavaScript filtering to show just that product and hide the DOM nodes for the other products. The problem: if the user changes the text in the filter from 'Nivea-Soap' to 'Lynx', the JavaScript will work fine and show the new products, but the URL stays at ?filter='Nivea-Soap'. Is there anything I can do about this? Of course, I don't want to reload the page with a new query string every time the criteria change. Somehow it'd be great to move the ?filter=... criteria into POST data instead, but how can I do this with a link? I don't know...
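
    One well-supported option, assuming HTML5-era browsers, is to keep the query string in sync from the filter handler itself via the History API, with no reload; the element id and applyFilter function here are invented stand-ins for the existing code:

      // Keep ?filter= in the address bar in sync with the text box, without reloading.
      var input = document.getElementById('filter-box');  // hypothetical input id
      input.addEventListener('input', function () {
        var value = input.value;
        applyFilter(value);  // the existing client-side filtering (assumed)
        var url = window.location.pathname +
                  (value ? '?filter=' + encodeURIComponent(value) : '');
        // replaceState rewrites the URL without navigating or reloading.
        window.history.replaceState({ filter: value }, '', url);
      });

    Because replaceState (unlike pushState) swaps the current history entry, refining the filter doesn't flood the back button, yet a bookmarked or shared URL reproduces exactly what the user was seeing.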

    Read the article
