Search Results

Search found 530 results on 22 pages for 'visitor'.

Page 18 of 22

  • Fast Data Executive Round Table FY14 event kit

    - by JuergenKress
    We are very interested in running joint marketing events with you, our partners! At our SOA Community Workspace (SOA Community membership required) you can find a new Fast Data Executive Round Table FY14 event kit. This event is designed for senior IT executives for the purposes of education, awareness, and thought leadership around the subject of big data, and around a specific flavor of big data - Fast Data - that has begun to spark the imagination of many Oracle customers. Fast Data is not new. It's a term coined by Ovum's Tony Baer as a way to represent the collection of 'high velocity' solutions within big data. For Oracle, the Fast Data campaign in FY13 began as a way to tie a broader set of solutions together (SOA/Business Process Management, Data Integration and Business Analytics) under a set of use cases focused on real-time, high-velocity data. It has helped to give Oracle a leap-frog advantage over many of the niche integration vendors (e.g. Informatica, Pega, Tibco, Software AG, Terracotta) who haven't been able to address these types of end-to-end use cases, which rely on the combination of filtering, in-memory data processing, correlation, real-time data movement and transformation, end-to-end analytics, and business process management. Only Oracle can address all the dimensions of fast data, and only Oracle can provide a set of engineered solutions to address this space. This event is designed to continue that thought-leadership momentum and raise awareness of what Oracle Fast Data solutions are designed to solve. It is designed to highlight real customer solutions and articulate the business benefits that fast data can address. This is not an event that gets into the esoteric technical standards of Hadoop, NoSQL, and in-memory data grids; it is an event that gets to the heart of the business problems that big data has left unaddressed, and charts the path for next steps in fast data. Get the Fast Data Executive Round Table FY14 event kit here. We can support such events by providing: Oracle speakers - contact your partner manager; marketing budget - contact your A&C marketing manager; event location - free use of Oracle Customer Visitor Centers conference rooms; promotion of your event at events.oracle.com: http://tinyurl.com/eventspecialized; an e-blast inviting customers to your event - contact your A&C marketing manager. For additional marketing kits, e.g. for Business Process Management, please visit our SOA Community Workspace. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • BGP Multipath & return routes

    - by Dennis van der Stelt
    I'm probably a complete n00b when it comes to Server Fault-related questions, but our IT department makes a bold statement that I wish to verify. I've searched the internet but can find nothing related to my question, so I come here. We have Threat Management Gateway 2010, and we used to just route requests to IIS; each request contained the client IP address, so we could see where it was coming from. But now they have turned on "Requests appear to come from the TMG server", so IP addresses aren't forwarded anymore; every request has the IP of the TMG server. The idea behind this is that because of multipath BGP routes, the incoming request goes over route A, but the acknowledgement messages could return over route B. The claim is that because the request doesn't appear to come from the first known source (our proxy) but instead from IIS, some smart routers on the visitor's side don't recognize the acknowledgement message and filter it out. In other words, the response never arrives. Again, this is the claim, but I cannot find ANY resources on the internet that support it. I do read about BGP multipath, but more in the case where there are alternative routes when the fastest route fails for some reason. So is the claim completely bogus, or is there (some) truth to it? Can someone explain or point me to resources? Thanks in advance!

    Read the article

  • High Apache load but zero traffic

    - by Adie
    I have a problem with a new server. I use a CentOS VPS with 1 GB of RAM running the WordPress CMS. Traffic is under 100 visitors/hour, but Apache load gets high and makes the server hang with zero free RAM, to the point where I can't connect through SSH and have to reboot the VPS to get it working again. Here is what the load on Apache looks like:

        Tasks: 66 total, 1 running, 65 sleeping, 0 stopped, 0 zombie
        Cpu(s): 1.6%us, 12.3%sy, 0.0%ni, 48.1%id, 23.0%wa, 4.8%hi, 10.2%si, 0.0%
        Mem: 1018776k total, 116620k used, 902156k free, 1236k buffers
        Swap: 1048568k total, 1013052k used, 35516k free, 26628k cached
        2949 apache 20 0 459m 42m 3732 D 3.0 4.2 0:09.23 httpd
        2959 apache 20 0 460m 29m 3744 D 2.0 3.0 0:02.72 httpd
        2968 apache 20 0 460m 26m 3808 D 2.0 2.6 0:02.27 httpd
        2972 apache 20 0 460m 24m 3784 D 2.0 2.5 0:02.44 httpd
        2986 apache 20 0 460m 29m 3784 R 2.0 2.9 0:02.40 httpd
        2969 apache 20 0 458m 29m 3864 D 1.6 3.0 0:02.63 httpd
        2974 apache 20 0 460m 25m 3820 D 1.6 2.6 0:02.43 httpd
        2990 apache 20 0 460m 23m 3920 D 1.6 2.4 0:02.36 httpd
        2994 apache 20 0 460m 31m 3756 D 1.6 3.2 0:02.62 httpd
        2956 apache 20 0 460m 26m 3740 D 1.3 2.7 0:02.73 httpd
        2957 apache 20 0 465m 22m 3644 D 1.3 2.3 0:02.80 httpd
        2967 apache 20 0 458m 24m 3764 D 1.3 2.5 0:02.60 httpd
        2970 apache 20 0 463m 25m 3764 D 1.3 2.6 0:03.07 httpd
        2971 apache 20 0 451m 22m 3792 D 1.3 2.3 0:02.47 httpd
        2973 apache 20 0 458m 25m 3768 D 1.3 2.6 0:02.52 httpd
        2987 apache 20 0 465m 20m 3772 D 1.3 2.1 0:03.02 httpd

    But sometimes the server stays up for more than 5-10 hours before the problems start again.
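    The top output above shows swap nearly exhausted (1013052k of 1048568k used) and 23% I/O wait while each httpd child holds roughly 25-40 MB resident; on a 1 GB VPS that usually means Apache is allowed to spawn more children than RAM can hold and the box starts thrashing. A hedged sketch of capping the prefork MPM (Apache 2.2 directive names; the values are illustrative assumptions and should be sized from your own measured per-child footprint):

        # httpd.conf (Apache 2.2, prefork MPM) - illustrative values only
        <IfModule prefork.c>
            StartServers          3
            MinSpareServers       2
            MaxSpareServers       5
            MaxClients           12   # ~12 children x ~35 MB resident leaves headroom for MySQL on 1 GB
            ServerLimit          12
            MaxRequestsPerChild 500   # recycle children to curb per-process memory growth
        </IfModule>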

    Read the article

  • Denying access to a website via htaccess based on an HTTP header

    - by neekster
    I've been trying for ages to get this to work and I can't put my finger on it. What I'm trying to do is block access to a site from a number of countries, based on the CF-IPCountry header added by CloudFlare. I figured htaccess was a suitable way to do this. We are running LiteSpeed 4.2.4 on top of DirectAdmin as a control panel. The problem we're having is that the htaccess rule doesn't seem to do anything. Here's the rule we tried:

        SetEnvIf CF-IPCountry AU UnwantedCountry=1
        Order allow,deny
        Deny from env=UnwantedCountry
        Allow from all

    That makes no difference at all; connections are still accepted. Just to check that the rule was at least being processed, I changed Allow from all to Deny from all, and connections were refused. So it appears to be a problem with the variable. Here are the relevant headers that come in with the request:

        Connection: Keep-Alive
        Accept-Encoding: gzip
        CF-Connecting-IP: xx.xx.xx.xx
        CF-IPCountry: AU
        X-Forwarded-For: xx.xx.xx.xx.xx
        CF-RAY: c9062956e2d04b6
        X-Forwarded-Proto: http
        CF-Visitor: {"scheme":"http"}
        Zone-Name: xx.com.au

    Hopefully someone can help me out; this has been driving me nuts for too long. Thanks
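    One hedged alternative worth testing, which sidesteps SetEnvIf entirely (assuming LiteSpeed's mod_rewrite compatibility honors %{HTTP:...} conditions, which it generally advertises):

        # .htaccess sketch - return 403 when CloudFlare reports the country as AU
        RewriteEngine On
        RewriteCond %{HTTP:CF-IPCountry} ^AU$ [NC]
        RewriteRule ^ - [F,L]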

    Read the article

  • Struggling with the proper way to set up permissions on a Linux/Apache web server

    - by Dr. DOT
    Your expert experience and assistance is greatly appreciated here. I have been running a LAMP server for a long time, yet I still struggle with the best way to set file and directory permissions for FTP and WWW activity. My control panel is WHM/cPanel (not that it makes a difference), and out of the box: files are owned by the user account set up in WHM (e.g., "abc"); files have a group setting of "abc" as well; file permissions are created as 644; directories are owned by "abc"; directories have a group setting of "abc"; directory permissions are created as 0755. Again, these are the default permission settings. Everything is fine with FTP activity, but please advise me if any of these file/directory settings create issues, especially with security. Here's where my struggle comes into play. I have PHP apps that allow a visitor to create, edit, rename, delete, etc. subdirectories and files in certain selected directories. PHP runs as "nobody" on my server. So in order to get my PHP/web apps to work, I have had to run chown nobody *, chgrp nobody *, and chmod 0777 * on everything in these selected subdirectories. I know this is probably a huge security hole (so don't ask me for any links :) but how should I set the permissions to allow my FTP user to do his thing while allowing the PHP apps to do their thing, while also minimizing the security risks and exposure? I know that big CMS systems like Drupal, Joomla, WordPress and so on handle this. Thanks ahead of time for reading through this and offering your expert advice!
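    A sketch of one common middle ground ("abc" and "nobody" are from the question; the upload path is a hypothetical placeholder): keep files owned by the FTP user, make the writable directories group-owned by the web server's group, and use the setgid bit so new entries inherit that group, instead of 0777 everywhere:

        # run as root; adjust the path to your actual web-writable directories
        chown -R abc:nobody /home/abc/public_html/uploads
        chmod -R 2775 /home/abc/public_html/uploads            # setgid: new files/dirs keep group "nobody"
        find /home/abc/public_html/uploads -type f -exec chmod 664 {} \;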

    Read the article

  • Does SNI represent a privacy concern for my website visitors?

    - by pagliuca
    Firstly, I'm sorry for my bad English. I'm still learning it. Here it goes: When I host a single website per IP address, I can use "pure" SSL (without SNI), and the key exchange occurs before the user even tells me the hostname and path that he wants to retrieve. After the key exchange, all data can be securely exchanged. That said, if anybody happens to be sniffing the network, no confidential information is leaked* (see footnote). On the other hand, if I host multiple websites per IP address, I will probably use SNI, and therefore my website visitor needs to tell me the target hostname before I can provide him with the right certificate. In this case, someone sniffing his network can track all the website domains he is accessing. Are there any errors in my assumptions? If not, doesn't this represent a privacy concern, assuming the user is also using encrypted DNS? Footnote: I also realize that a sniffer could do a reverse lookup on the IP address and find out which websites were visited, but the hostname travelling in plaintext through the network cables seems to make keyword based domain blocking easier for censorship authorities.

    Read the article

  • Which is the best smart automatic file replication solution for cloud-storage-based systems?

    - by TORr0t
    I am looking for a solution for a project I am working on. We are developing a web system where people can upload their files and other people can download them (similar to the rapidshare.com model). The problem is that some files can be in much higher demand than others. The scenario is like this: I have uploaded my birthday video and shared it with all of my friends. I uploaded it to myproject.com and it was stored on one of the cluster nodes, which has a 100 Mbit connection. The problem is that once all of my friends want to download the file, they can't, since the bottleneck here is that 100 Mbit link - roughly 12.5 MB per second in total - and with 1000 friends downloading, each of them gets only about 12.5 KB per second. I am not even taking into account that the HDD is serving the same files. My network infrastructure is as follows: one 1 Gbit (client) server connected to 4 storage server nodes that each have a 100 Mbit connection. The 1 Gbit server can handle the traffic of 1000 users if the storage side can stream faster than a single node's 100 Mbit to my 1 Gbit (client) server, with visitors streaming directly from the client server instead of the storage nodes. I can do this by replicating the file onto 2 nodes. But I don't want to replicate all the files uploaded to my network, since that costs much more. So I need a cloud-based system that will push files onto replicated nodes automatically when demand for those files is high, and, when demand is low, delete them from the other nodes so each file stays on only 1 node. I have looked at Gluster and asked in their IRC channel; Gluster can't do such a thing. It is only able to replicate all of the files or none of them, but I need the cluster software to do this automatically. Any solutions? (Instead of recommending me Amazon S3.)

    Read the article

  • Nginx, Apache, MySQL, Memcached on a server with 4 GB of RAM. How to optimize so there is enough memory?

    - by TomSawyer
    I have a dedicated server with Nginx as a proxy in front of Apache, plus Memcached and MySQL, and 4 GB of RAM. These days the number of visitors on my site hasn't increased, but the server always gets overloaded at certain times of day (9 AM - 3 PM). RAM in use increases second by second until it is full; at that moment the server gets overloaded, and I have to kill the Apache and MySQL services and reboot to get free memory. That's the cycle. Here is my RAM in use at the moment, in MB: 160 (nginx), 220 (apache), 512 (memcache), 924 (mysql). Here are the process counts: 4 (nginx), 14 (apache), 5 (memcache), 20 (mysql). And here's my my.cnf config. Can someone help me optimize it?

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        skip-locking
        skip-networking
        skip-name-resolve
        # enable log-slow-queries
        log-slow-queries = /var/log/mysql-slow-queries.log
        long_query_time=3
        max_connections=200
        wait_timeout=64
        connect_timeout = 10
        interactive_timeout = 25
        thread_stack = 512K
        max_allowed_packet=16M
        table_cache=1500
        read_buffer_size=4M
        join_buffer_size=4M
        sort_buffer_size=4M
        read_rnd_buffer_size = 4M
        max_heap_table_size=256M
        tmp_table_size=256M
        thread_cache=256
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=16M
        thread_concurrency=8
        myisam_sort_buffer_size=128M
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0
        [mysqldump]
        quick
        max_allowed_packet=16M
        [mysql]
        no-auto-rehash
        [isamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M
        [myisamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M
        [mysqlhotcopy]
        interactive-timeout
        [mysql.server]
        user=mysql
        basedir=/var/lib
        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
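    One thing that stands out in this my.cnf (a hedged observation, not a full audit): the per-connection buffers are large. With read_buffer_size, join_buffer_size, sort_buffer_size and read_rnd_buffer_size at 4M each, a busy connection can claim around 16 MB, and at max_connections=200 that alone can approach 3 GB in the worst case - these buffers are allocated on demand, but it matches the "RAM fills until the box falls over" pattern described. A sketch of more conservative values to test against your own workload:

        # my.cnf sketch - illustrative values, verify against your slow-query log
        max_connections      = 100
        wait_timeout         = 30
        read_buffer_size     = 256K
        join_buffer_size     = 256K
        sort_buffer_size     = 1M
        read_rnd_buffer_size = 512K
        tmp_table_size       = 64M
        max_heap_table_size  = 64M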

    Read the article

  • Please take a look at my server's RAM usage

    - by user66779
    Hi, I am a noob with servers. I have a CentOS 5.5 VPS with 512 MB of RAM. My goal is to have it host just one Magento store. I've installed Magento on the server without any control panel, by installing LAMP myself along with whatever PHP extensions were necessary to get Magento to install. As soon as I visit my Magento store, the RAM on the VPS is almost completely used, with only about 100 MB left. Please see this screenshot of htop, taken just after I alone visited the website: http://img714.imageshack.us/img714/1944/screenouv.png As you can see, there is only around 100 MB left. Is that normal? I'm wondering if I might have done something stupid with the server that makes it very resource-hungry. I installed Apache from the CentOS base repo, PHP 5.3 from the IUS repository, and MySQL 5.1 also from the IUS repo. I haven't changed any of the default config files for any of these, except to set memory_limit to 256M in php.ini. Is there anything I can do to free more RAM? I'm clueless, but I see each Apache daemon is using 8% of available RAM, and AFAIK each visitor needs one Apache daemon, so I would run out of RAM with just a handful of visitors. Thanks for your advice.

    Read the article

  • Can push technology / comet be faked?

    - by stef
    A client has a dating site and would like to have a popup (either a nice JavaScript overlay or a new browser window popup; we're flexible) displayed to users when another user is visiting their page. I'm familiar with push technology and Comet, but it's quite challenging to implement and may place serious strain on a server with over 100,000 unique visitors per day. I'm wondering if there is a way of faking this, perhaps by not being accurate to the second. I can't really think of any way. This is a classic LAMP environment. Anyone? EDIT: What about this: placing an iframe on the page that refreshes every few seconds; on each load it checks in the DB whether a visitor has been logged on this profile page, and if so, it shows a message. The message would be visible, but the background of the iframe would blend in with the background of the site and be invisible. If the message faded in and out, it would look like a JS box "popping up".

    Read the article

  • Spam-proof hit counter in Django

    - by Jim Robert
    I already looked at the most popular Django hit counter solutions, and none of them seem to solve the issue of spamming the refresh button. Do I really have to log the IP of every visitor to keep them from artificially boosting page view counts by spamming the refresh button (or writing a quick and dirty script to do it for them)? More information: right now you can inflate your view count with the following few lines of Python code - so little that you don't even really need to write a script; you could just type it into an interactive session:

        from urllib import urlopen

        num_of_times_to_hit_page = 100
        url_of_the_page = "http://example.com"
        for x in range(num_of_times_to_hit_page):
            urlopen(url_of_the_page)

    Solution I'll probably use: to me it's a pretty rough situation when you need to do a bunch of writes to the database on EVERY page view, but I guess it can't be helped. I'm going to implement IP logging due to several users artificially inflating their view counts. It's not that they're bad people or even bad users. See the answer about solving the problem with caching; I'm going to pursue that route first and will update with results. For what it's worth, it seems Stack Overflow is using cookies (I can't increment my own view count, but it increased when I visited the site in another browser). I think the benefit is just too great, and this sort of 'cheating' is just too easy right now. Thanks for the help everyone!
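    As a sketch of the cache-first route mentioned above (the model name, app layout and TTL here are assumptions, not from any of the linked solutions): record a short-lived marker per (page, IP) pair in Django's cache and only write to the database when no marker exists. Refresh spam then costs one cache lookup per hit instead of a database write:

        # views.py - hedged sketch
        from django.core.cache import cache
        from django.db.models import F

        from myapp.models import Page  # hypothetical model with an IntegerField "views"

        DEDUP_TTL = 600  # seconds during which repeat hits from one IP don't count

        def count_hit(request, page):
            ip = request.META.get("REMOTE_ADDR", "unknown")
            marker = "hit:%s:%s" % (page.pk, ip)
            # cache.add() only sets the key if it is absent, so it doubles as the check
            if cache.add(marker, 1, DEDUP_TTL):
                Page.objects.filter(pk=page.pk).update(views=F("views") + 1)

    An F() expression keeps the increment in the database, so concurrent hits don't race on a stale Python-side value.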

    Read the article

  • How to better (unambiguously) use the term CAPTCHA and terms for various types of interactions?

    - by vgv8
    I am working on a survey of the state of the art and trends in spam prevention techniques. I observe that non-intrusive spam prevention techniques that are transparent to the visitor (like context-based filtering or honey traps) are frequently called non-CAPTCHA. Is that a correct understanding of the term CAPTCHA, which is a "type of challenge-response test used in computing to ensure that the response is not generated by a computer" [1]? Challenge-response does not seem to imply obligatory human involvement. So, which understanding (definition) of the term, and which classification, had I better stick with? What should I call a CAPTCHA without direct human interaction, in order to avoid ambiguity and confusion of terms? How would I best (succinctly and unambiguously) coin a term for CAPTCHAs that require human interaction but without typing into a textbox? And how would I best (succinctly and unambiguously) coin terms to mark the difference between human interaction with images (playing with, drag-and-dropping, rearranging, or clicking on images) vs. just recognizing them (and then typing the answer into a textbox without interacting with the images)? PS. The problem is that recognizing a wiggled word in an image, or typing the answer to a question, is also interaction, and when I start to use the terms "interaction", "interactive", "captcha", "protection", "non-captcha", "non-interactive", "static", "dynamic", "visible", "hidden", the terms overlap ambiguously with one another (especially because the definitions, or the actual practice of their usage, are vague or contradictory). [1] http://en.wikipedia.org/wiki/CAPTCHA

    Read the article

  • Semi-dynamic CDN

    - by dwi kristianto
    I'm developing a couple of websites using PHP (a directory script, etc.) and WordPress as a CMS. I need to improve performance by using a CDN for static files (CSS, JS, images). The problem is that the CSS and JavaScript files are generated on the fly; I did that following Yahoo's and other experts' advice to combine the files into one file, and also to change the basic colors of the CSS files. For the time being I use a couple of small VPSes, but it's still not fast enough. I already contacted MaxCDN, and the support guy said they don't have that kind of service. What I need is a CDN that will serve requests from users/visitors even when there is no file on the local disk; the CDN should redirect/fetch the file from another domain/server. On a VPS this can be done easily using a combination of .htaccess and PHP, but NOT on a CDN; most CDNs only support purely static files. Is there any CDN that will serve semi-dynamic files like this?

    Read the article

  • How to manage a project in this scenario

    - by vijay.shad
    Hi all, I am working on a web application which has a good number of static, pre-login pages. These pages can also contain some simple forms where we would like to capture visitors' details. Post-login, I have my main application. I am confused about the development and deployment architecture of the application. The post-login part has a release cycle of 1-2 months, while the pre-login pages are to be updated on a weekly basis. It is difficult to make a release of the pre-login pages because the overall WAR also contains the post-login application, which sometimes is not release-ready. Currently I have both of these parts bundled in a single WAR project. Please help me by letting me know the best practices whereby I can achieve the following: manage the releases of the two parts independently (I am using Maven, so is there a way I can share resources such as CSS and images between the two parts?), given that the header and footer of my application are going to be the same on pre-login and post-login pages. I was thinking of deploying these apps as two WAR files in my Tomcat container, but then how will I manage the common resources like CSS and images? A sketch of one possible Maven layout follows below. Rgds, Vijay
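    One common Maven arrangement for this (a sketch under assumptions; the module and artifact names are invented) is three modules: a shared WAR holding the common CSS, images, header and footer, pulled into each site WAR through the maven-war-plugin's overlay mechanism, so the two deployable WARs keep independent release cycles:

        <!-- parent pom.xml: one module per release cycle, plus the shared overlay -->
        <modules>
          <module>site-common</module>    <!-- packaging: war; CSS, images, header/footer -->
          <module>site-prelogin</module>  <!-- weekly releases -->
          <module>site-app</module>       <!-- 1-2 month releases -->
        </modules>

        <!-- in site-prelogin/pom.xml and site-app/pom.xml: overlay the shared WAR -->
        <dependency>
          <groupId>com.example</groupId>
          <artifactId>site-common</artifactId>
          <version>1.0.0</version>
          <type>war</type>
        </dependency>

    The war plugin merges a WAR-type dependency's contents into the consuming WAR at package time, so both deployed WARs carry the same CSS and images without duplicating the sources.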

    Read the article

  • My website was hacked using StatCounter! Does StatCounter keep a record of cookies?

    - by Cyril Gupta
    I had a rather interesting case of hacking on my ASP.NET MVC website. For this website I had implemented a rather uncomplicated authentication system for my admin area: an encrypted cookie which held an identifying signature for the member. Whenever the admin visits the website, the cookie is decrypted and the signature verified; if it matches, he doesn't have to sign in. A couple of days ago a visitor to my site told me that he was able to sign into my website simply by clicking on a referral link in his StatCounter console which pointed to my admin area (I had visited his site from a link inside my admin view). He just clicked on a link in StatCounter and he was signed in as the admin! The only way this could have happened is if StatCounter somehow recorded my cookies and used them when he clicked on the link pointing to my admin area. Is that logical or fathomable? I don't understand what's going on. Do you have any suggestions as to how I can protect my website against things like this?

    Read the article

  • JavaScript error: Object required... tough time fixing this

    - by rtnav2301
    The situation is this: a friend of mine asked me to help her with a website she's working on called www.KeepsakePartyInvitations.com. Her original designer created this code but didn't finish the project, so here I am to help save the day. Once a photo has been added to the party template and the mouse is placed over the image, an alternate text message appears to the right. The message informs the visitor how to reposition and resize the image, if necessary. I'm testing the procedure in the invitation to reposition the image. I follow the directions about moving the picture, but the mouse does not let go of the photo: wherever the mouse moves, so does the image. The same thing occurs if you want to resize the image; you follow the directions, but the image does not resize. The error message in IE 8 states: Message: Object required, Line: 873, Char: 3, Code: 0, URI: http://www.keepsakepartyinvitations.com/Includes/JCode.js. When I checked the JavaScript file, line 873 reads: tempX = e.clientX + document.body.scrollLeft, and the function containing that line is called getMouseXY(e). Please help me; it has been a long while since I've worked with JavaScript and I'm motivated to get this fixed. Thank you!
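    A hedged guess, since only one line of JCode.js is shown: "Object required" on e.clientX in IE 8 is the classic symptom of getMouseXY being invoked without an event argument, because old IE exposes the event as window.event instead of passing it to the handler. A sketch of the usual guard (not a verified patch for this particular site):

        function getMouseXY(e) {
            e = e || window.event;  // old IE puts the event on window, not in the argument
            var tempX = e.clientX + (document.documentElement.scrollLeft || document.body.scrollLeft);
            var tempY = e.clientY + (document.documentElement.scrollTop  || document.body.scrollTop);
            return { x: tempX, y: tempY };
        }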

    Read the article

  • How can I validate/secure/authenticate a JavaScript-based POST request?

    - by Bungle
    A product I'm helping to develop will basically work like this: A Web publisher creates a new page on their site that includes a <script> from our server. When a visitor reaches that new page, that <script> gathers the text content of the page and sends it to our server via a POST request (cross-domain, using a <form> inside of an <iframe>). Our server processes the text content and returns a response (via JSONP) that includes an HTML fragment listing links to related content around the Web. This response is cached and served to subsequent visitors until we receive another POST request with text content from the same URL, at which point we regenerate a "fresh" response. These POSTs only happen when our cached TTL expires, at which point the server signifies that and prompts the <script> on the page to gather and POST the text content again. The problem is that this system seems inherently insecure. In theory, anyone could spoof the HTTP POST request (including the referer header, so we couldn't just check for that) that sends a page's content to our server. This could include any text content, which we would then use to generate the related content links for that page. The primary difficulty in making this secure is that our JavaScript is publicly visible. We can't use any kind of private key or other cryptic identifier or pattern because that won't be secret. Ideally, we need a method that somehow verifies that a POST request corresponding to a particular Web page is authentic. We can't just scrape the Web page and compare the content with what's been POSTed, since the purpose of having JavaScript submit the content is that it may be behind a login system. Any ideas? I hope I've explained the problem well enough. Thanks in advance for any suggestions.
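    One mitigation, sketched here under assumptions (it raises the bar against casual replay rather than closing the hole, since anyone who can load the page can also obtain a token): when serving the <script>, have the server issue a short-lived HMAC token bound to the page URL, and require it on the POST. The secret never leaves the server, so a forger must fetch a fresh token through the real page flow instead of fabricating requests offline:

        # server-side sketch (framework-agnostic); SECRET and TTL are assumptions
        import hashlib, hmac, time

        SECRET = b"server-side-secret"   # never shipped to the client
        TTL = 300                        # seconds a token stays valid

        def issue_token(page_url):
            ts = str(int(time.time()))
            sig = hmac.new(SECRET, ("%s|%s" % (page_url, ts)).encode(), hashlib.sha256).hexdigest()
            return "%s:%s" % (ts, sig)

        def verify_token(page_url, token):
            try:
                ts, sig = token.split(":", 1)
            except ValueError:
                return False
            if time.time() - int(ts) > TTL:
                return False
            expected = hmac.new(SECRET, ("%s|%s" % (page_url, ts)).encode(), hashlib.sha256).hexdigest()
            return hmac.compare_digest(sig, expected)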

    Read the article

  • Load spikes on an Apache/MySQL server with WordPress MU

    - by Vikram Goyal
    Hi there, I am trying to investigate the reasons for some mysterious load spikes on a Linux Apache server (2.2.14) running PHP 5.2.9, on a dedicated machine with enough processing power and memory. My primary web application is a WordPress MU (2.9.2) installation. I have investigated and ruled out a DoS attack and MySQL or Apache configuration issues. The log files don't give me anything of interest, except to tell me that there is severe load. The load (which can go up to 100) just seems to come and go. It helps that I have a script that checks the load every 3 minutes and restarts Apache; restarting helps, and the server comes back, until it happens again. There seems to be no set time frame, or visitor count, that triggers it - even a low number of concurrent visitors (20) can set it off. I am almost convinced that there is a rewrite loop somewhere that is causing Apache to go mad: Apache is trying to serve something that causes it to spawn more and more processes until it keels over. My question is: given that I am convinced this is a rewrite issue or something similar, how can I try to figure out what the problem is? What should I monitor? Apache logs are voluminous and not very helpful. Of course, if this is not the issue, then at least knowing what to look for will help me eliminate it and look for something else. Thanks! Vikram
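    For the specific rewrite-loop suspicion, Apache 2.2 has mod_rewrite logging that records every rule evaluation per request (these directive names are 2.2-era and server-config only; in 2.4 this functionality moved into LogLevel). A sketch, enabled briefly since the log grows very fast, together with server-status to catch which URLs the stuck workers are serving:

        # httpd.conf (Apache 2.2) - enable temporarily during a spike window
        RewriteLog      /var/log/httpd/rewrite.log
        RewriteLogLevel 3

        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>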

    Read the article

  • Is it possible to do A/B testing by page rather than by individual?

    - by mojones
    Let's say I have a simple ecommerce site that sells 100 different t-shirt designs. I want to do some A/B testing to optimise my sales. Let's say I want to test two different "buy" buttons. Normally, I would use A/B testing to randomly assign each visitor to see button A or button B (and try to ensure that the user experience is consistent by storing that assignment in session, cookies etc.). Would it be possible to take a different approach and instead randomly assign each of my 100 designs to use button A or B, and measure the conversion rate as (number of sales of design n) / (pageviews of design n)? This approach would seem to have some advantages: I would not have to worry about keeping the user experience consistent - a given page (e.g. www.example.com/viewdesign?id=6) would always return the same HTML. If I were to test different prices, it would be far less distressing to the user to see different prices for different designs than different prices for the same design on different computers. I also wonder whether it might be better for SEO - my suspicion is that Google would "prefer" to always see the same HTML when crawling a page. Obviously this approach would only be suitable for a limited number of sites; I was just wondering if anyone has tried it?
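    A sketch of the design-level assignment in plain Python (hashing the design id is one arbitrary-but-stable way to split designs; the statistical caveat is that designs differ in intrinsic popularity, so the comparison has to be between per-design conversion rates across the two groups, not pooled totals):

        import hashlib

        def button_variant(design_id):
            """Stable assignment of a design to variant 'A' or 'B'."""
            digest = hashlib.md5(str(design_id).encode()).hexdigest()
            return "A" if int(digest, 16) % 2 == 0 else "B"

        # every pageview of design 6 sees the same button, on every visit and every crawl
        assert button_variant(6) == button_variant(6)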

    Read the article

  • Realtime visitors with Node.js & Redis & Socket.IO & PHP

    - by orhan bengisu
    I am new to these technologies. I want to get realtime visitor counts for each product on my site - a notice like "X users are seeing this product". Whenever a user connects to a product page, the counter for that product should be increased, and when they disconnect, decreased, just for that product. I tried to search through lots of documents, but I got confused. I am using the Predis library for PHP. What I have done may be totally wrong; I am not sure where to put createClient, when to subscribe, and when to unsubscribe. What I have done so far, on the product detail page:

        $key = "product_views_".$product_id;
        $counter = $redis->incr($key);
        $redis->publish("productCounter", json_encode(array("product_id"=> "1000", "counter"=> $counter )));

    And in app.js:

        var app = require('express')()
          , server = require('http').createServer(app)
          , socket = require('socket.io').listen(server, { log: false })
          , url = require('url')
          , http = require('http')
          , qs = require('querystring')
          , redis = require("redis");

        var connected_socket = 0;
        server.listen(8080);

        var productCounter = redis.createClient();
        productCounter.subscribe("productCounter");
        productCounter.on("message", function(channel, message){
            console.log("%s, the message : %s", channel, message);
            io.sockets.emit(channel, message);
        });

        productCounter.on("unsubscribe", function(){
            // I think I should decrease the counter here - should I? And how?
        });

        io.sockets.on('connection', function (socket) {
            connected_socket++;
            socket_id = socket.id;
            console.log("["+socket_id+"] connected");
            socket.on('disconnect', function (socket) {
                connected_socket--;
                console.log("Client disconnected");
                productCounter.unsubscribe("productCounter");
            });
        });

    Thanks a lot for your answers!

    Read the article

  • Is it wise to use temporary tables?

    - by Industrial
    Hi guys, we have a MySQL database table for products. We are using a cache layer to reduce database load, but we think it's a good idea to also minimize the actual data that needs to be stored in the cache layer, to speed up the application further. All the products in the database that are visible to visitors have a price attached to them. The prices are stored in a different table, called prices; there are multiple price categories, depending on which discount level each visitor (customer) qualifies for. From time to time there are campaigns, which means that a special price for each product is available. The special prices are stored in a table called specials. Is it bad to make a temp table that binds the tables together? It would hold only the necessary information and would of course be cached.

        productId | hasPrice | hasSpecial
        ----------|----------|-----------
                1 |        1 |          0
                2 |        1 |          1

    By doing so, it would be super easy to know whether a specific product really has a price, without having to iterate through the complete prices or specials table each time a product is listed or presented. Are temp tables a common thing for web applications, or is this just bad design?
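    A terminology note and a sketch: what is described is less a MySQL TEMPORARY TABLE (which lives only for one connection) than a summary table, which is a common and legitimate pattern. One hedged way to build and refresh it (the productId/hasPrice/hasSpecial names follow the question; the foreign-key column names in prices and specials are assumptions):

        -- summary table, rebuilt on a schedule or whenever prices/specials change
        CREATE TABLE product_flags (
          productId  INT UNSIGNED NOT NULL PRIMARY KEY,
          hasPrice   TINYINT(1)   NOT NULL DEFAULT 0,
          hasSpecial TINYINT(1)   NOT NULL DEFAULT 0
        );

        REPLACE INTO product_flags (productId, hasPrice, hasSpecial)
        SELECT p.id,
               EXISTS(SELECT 1 FROM prices   pr WHERE pr.product_id = p.id),
               EXISTS(SELECT 1 FROM specials sp WHERE sp.product_id = p.id)
        FROM products p;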

    Read the article

  • Decoupling the view, presentation and ASP.NET Web Forms

    - by John Leidegren
    I have an ASP.NET Web Forms page which the presenter needs to populate with controls. This interaction is somewhat sensitive to the page life cycle, and I was wondering if there's a trick to it that I don't know about. I wanna be practical about the whole thing but not compromise testability. Currently I have this:

        public interface ISomeContract
        {
            void InstantiateIn(System.Web.UI.Control container);
        }

    This contract has a dependency on System.Web.UI.Control, and I need that to be able to do things with the ASP.NET Web Forms programming model. But neither the view nor the presenter may have knowledge of ASP.NET server controls. How do I get around this? How can I work with the ASP.NET Web Forms programming model in my concrete views without taking a System.Web.UI.Control dependency in my contract assemblies? To clarify things a bit: this type of interface is all about UI composition (using MEF). It's known throughout the framework, but it's really only called from within the concrete view. The concrete view is still the only thing that knows about ASP.NET Web Forms. However, public methods like InstantiateIn(System.Web.UI.Control) exist in my contract assemblies, and that implies a dependency on ASP.NET Web Forms. I've been thinking about some double-dispatch mechanism, or even the visitor pattern, to try and work around this.
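    One way to invert the dependency, sketched under assumptions (the interface and operation names here are invented, and the operations would need to match whatever the composed parts actually do): let the contract assembly own an abstraction for "a place to put UI", and keep the only System.Web.UI reference inside an adapter in the concrete view assembly:

        // contract assembly - no System.Web.UI reference anywhere
        public interface IControlContainer   // hypothetical name
        {
            void AddWidget(string id, string text);   // whatever operations composed parts need
        }

        public interface ISomeContract
        {
            void InstantiateIn(IControlContainer container);
        }

        // concrete view assembly - the only place that knows about Web Forms
        public class WebFormsContainer : IControlContainer
        {
            private readonly System.Web.UI.Control _control;
            public WebFormsContainer(System.Web.UI.Control control) { _control = control; }

            public void AddWidget(string id, string text)
            {
                _control.Controls.Add(new System.Web.UI.WebControls.Label { ID = id, Text = text });
            }
        }

    The trade-off is that IControlContainer has to enumerate the operations parts may perform, which is exactly what makes it testable with a fake container.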

    Read the article

  • Working with extra fields in an inline form - save_model, save_formset, can't make sense of the difference

    - by magicrebirth
    Suppose I am in the usual situation where there are extra fields in the many-to-many relationship:

        class Person(models.Model):
            name = models.CharField(max_length=128)

        class Group(models.Model):
            name = models.CharField(max_length=128)
            members = models.ManyToManyField(Person, through='Membership')

        class Membership(models.Model):
            person = models.ForeignKey(Person)
            group = models.ForeignKey(Group)
            date_joined = models.DateField()
            invite_reason = models.CharField(max_length=64)

        # other models which are unrelated to the ones above..
        class Trip(models.Model):
            placeVisited = models.ForeignKey(Place)
            visitor = models.ForeignKey(Person)
            pleasuretrip = models.BooleanField()

        class Place(models.Model):
            name = models.CharField(max_length=128)

    I want to add some extra fields to the Membership form that gets displayed through the inline. These fields are basically a shortcut to the instantiation of another model (Trip). Trip can have its own admin views, but these shortcuts are needed because when my project partners are entering Membership data in the system, they happen to have the Trip information handy as well (and also because some of the info in Membership can just be copied over to Trip, etc.). So all I want are two extra fields in the Membership inline - placeVisited and pleasuretrip - which, together with the Person instance, will let me instantiate the Trip model in the background. I found out I can easily add extra fields to the inline view by defining my own form. But once the data have been entered, how and when do I reference them in order to perform the save operations I need?

        class MyForm(forms.ModelForm):
            place = forms.ModelChoiceField(required=False, queryset=Place.objects.all(), label="place",)
            pleasuretrip = forms.BooleanField(required=False, label="...")

        class MembershipInline(admin.TabularInline):
            model = Membership
            form = MyForm

            def save_model(self, request, obj, form, change):
                place = form.place
                pleasuretrip = form.pleasuretrip
                person = form.person
                ....
                # now I can create Trip instances with those data
                ....
                obj.save()

        class GroupAdmin(admin.ModelAdmin):
            model = Group
            ....
            inlines = (MembershipInline,)

    This doesn't seem to work. I'm also a bit puzzled by the save_formset method - maybe that is the one I should be using? Many thanks in advance for the help!
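    One likely reason save_model on the inline never sees the data: for inlines, Django routes saving through the parent ModelAdmin's save_formset, and the custom fields are read from each form's cleaned_data rather than as form attributes. A hedged sketch along those lines (field names follow the form above; the guards for empty extra rows are assumptions about how the inline is used):

        class GroupAdmin(admin.ModelAdmin):
            inlines = (MembershipInline,)

            def save_formset(self, request, form, formset, change):
                instances = formset.save()  # saves the Membership rows as usual
                for membership_form in formset.forms:
                    if not membership_form.cleaned_data:
                        continue  # skip the blank "extra" rows the inline renders
                    place = membership_form.cleaned_data.get("place")
                    pleasure = membership_form.cleaned_data.get("pleasuretrip", False)
                    person = membership_form.cleaned_data.get("person")
                    if place and person:
                        Trip.objects.create(placeVisited=place,
                                            visitor=person,
                                            pleasuretrip=bool(pleasure))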

    Read the article
