Search Results

Search found 4990 results on 200 pages for 'traffic measurement'.

Page 185 of 200

  • Cookie blocked/not saved in IFRAME in Internet Explorer

    - by Piskvor
    I have two websites, let's say they're example.com and anotherexample.net. On anotherexample.net/page.html, I have an IFRAME SRC="http://example.com/someform.asp". That IFRAME displays a form for the user to fill out and submit to http://example.com/process.asp. When I open the form ("someform.asp") in its own browser window, all works well. However, when I load someform.asp as an IFRAME in IE 6 or IE 7, the cookies for example.com are not saved. In Firefox this problem doesn't appear. For testing purposes, I've created a similar setup on http://newmoon.wz.cz/test/page.php. example.com uses cookie-based sessions (and there's nothing I can do about that), so without cookies, process.asp won't execute. How do I force IE to save those cookies? Results of sniffing the HTTP traffic: on the GET /someform.asp response, there's a valid per-session Set-Cookie header (e.g. Set-Cookie: ASPKSJIUIUGF=JKHJUHVGFYTTYFY), but on the POST /process.asp request, there is no Cookie header at all. Edit3: some AJAX+serverside scripting is apparently capable of sidestepping the problem, but that looks very much like a bug, plus it opens a whole new set of security holes. I don't want my applications to use a combination of bug+security hole just because it's easy. Edit: the P3P policy was the root cause, full explanation below.
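
    Since the edit identifies the P3P policy as the root cause, a minimal illustration of the usual fix is to send a compact-policy header with every response from example.com. The sketch below is ASP.NET/C# rather than the classic ASP of the question, and the policy tokens are placeholders; a real policy has to reflect the site's actual privacy practices.

        protected void Page_Load(object sender, EventArgs e)
        {
            // IE accepts third-party (IFRAME) cookies only when the response
            // carries a P3P compact policy; the tokens below are placeholders.
            Response.AddHeader("P3P", "CP=\"CAO PSA OUR\"");
        }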

    Read the article

  • Forms authentication: disable redirect to the login page

    - by codeka
    I have an application that uses ASP.NET Forms Authentication. For the most part, it's working great, but I'm trying to add support for a simple API via an .ashx file. I want the ashx file to have optional authentication (i.e. if you don't supply an Authentication header, then it just works anonymously). But, depending on what you do, I want to require authentication under certain conditions. I thought it would be a simple matter of responding with status code 401 if the required authentication was not supplied, but it seems like the Forms Authentication module is intercepting that and responding with a redirect to the login page instead. What I mean is, if my ProcessRequest method looks like this: public void ProcessRequest(HttpContext context) { context.Response.StatusCode = 401; context.Response.StatusDescription = "Authentication required"; } Then instead of getting a 401 error code on the client, like I expect, I'm actually getting a 302 redirect to the login page. For normal HTTP traffic, I can see how that would be useful, but for my API page, I want the 401 to go through unmodified so that the client-side caller can respond to it programmatically instead. Is there any way to do that?
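
    One commonly suggested workaround, sketched here under the assumption that API calls can be recognized by their .ashx extension, is to undo the module's redirect in Application_EndRequest, turning the 302 back into the original 401 (newer framework versions also expose Response.SuppressFormsAuthenticationRedirect for exactly this case):

        protected void Application_EndRequest(object sender, EventArgs e)
        {
            HttpContext context = HttpContext.Current;
            // Assumption: everything ending in .ashx is an API call.
            bool isApiCall = context.Request.Path.EndsWith(
                ".ashx", StringComparison.OrdinalIgnoreCase);
            // FormsAuthenticationModule rewrites a 401 into a 302 to the
            // login page; for API requests, put the 401 back.
            if (isApiCall && context.Response.StatusCode == 302)
            {
                context.Response.ClearContent();
                context.Response.RedirectLocation = null;
                context.Response.StatusCode = 401;
                context.Response.StatusDescription = "Authentication required";
            }
        }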

    Read the article

  • Creating C++ client app for some abstract windows server - how to manage TCP connection to server speed?

    - by Kabumbus
    So we have a server with a known IP address and port, and we are developing that server ourselves, so we can implement on it whatever is needed to help. What are the standard/best practices for managing data transfer speed between a C++ Windows client app and a (C++) server? My main question is how to find out how much data can be uploaded/downloaded from/to the client over his low-speed network to my relatively super-fast server. (I need it to set the bit rate of his live audio/video stream.) My try at explaining point 3: we do not care how fast our server is; it is always faster than needed. We care about the client trying to stream his media out to our server. He streams encoded (via ffmpeg) live video data to our server, but he has, say, ADSL with 500 kb/s of outgoing traffic. He also uses ICQ or whatever, so he actually has less than 500 kb/s available. And he wants to stream live video! So we need to set up our ffmpeg to encode video with respect to the bit rate the user can provide. We develop the server side and the client side. We need a way of finding out how much the user can upload per second currently (so the value can change dynamically over time).
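
    One hedged way to get that number is to time a fixed-size probe upload on the server and derive bits per second from it; repeating the probe periodically gives the dynamically changing value the poster wants. The sketch below is C# for brevity (the same logic ports directly to a C++ server) and assumes a System.IO.Stream wrapped around the accepted client socket:

        // Reads a probe of known size from the client and estimates the
        // usable upstream bandwidth in bits per second. Illustrative only;
        // the ffmpeg bit rate would then be set somewhat below this value.
        static long EstimateUploadBitsPerSecond(Stream clientStream, int probeBytes)
        {
            var buffer = new byte[8192];
            long received = 0;
            var watch = System.Diagnostics.Stopwatch.StartNew();
            while (received < probeBytes)
            {
                int n = clientStream.Read(buffer, 0, buffer.Length);
                if (n == 0) break; // client closed the connection early
                received += n;
            }
            watch.Stop();
            return (long)(received * 8 / watch.Elapsed.TotalSeconds);
        }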

    Read the article

  • ASP.Net Session Storage provider in 3-layer architecture

    - by Tedd Hansen
    I'm implementing a custom session storage provider in ASP.NET. We have a strict 3-layer architecture and therefore the session storage needs to go through the business layer: Presentation-Business-Database. The business layer is accessed through WCF. The database is MSSQL. What I need is (in order of preference): a commercial/free/open source product that solves this, or the source code of a SqlSessionStateStore (custom session store) (not the ODBC sample on MSDN) that I can modify to use a middle layer. I've tried looking at the .NET source through Reflector, but the code is not usable. Note: I understand how to do this. I am looking for working samples, preferably ones that have been proven to work fine under heavy load. The ODBC sample on MSDN doesn't use the (new?) stored procs that the built-in SqlSessionStateStore uses - I'd like to use these if possible (decreases traffic). Edit1: To answer Simon's question with more info: the ASP.NET Session object can be stored in InProc, the ASP.NET State Service, or SQL Server. In a secure 3-layer model the presentation layer (web server) does not have direct/physical access to the database layer (SQL Server). And even without the physical limitations, from an architectural standpoint you may not want this. InProc and the ASP.NET State Service do not support load balancing and have no fault tolerance. Therefore the only option is to access SQL through a webservice middle layer (the business layer).
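
    Not a full provider, but one building block worth showing: in a SqlSessionStateStore-style provider, what actually travels to the storage tier is the serialized session item collection, so routing through a business layer mostly means shipping a byte[] over WCF instead of writing it to SQL directly. A hedged sketch of the (de)serialization helpers, using the real framework APIs; the surrounding provider class is left out:

        // Turns the session items into a byte[] that can be shipped through
        // the business layer and back again on the next request.
        private static byte[] Serialize(SessionStateItemCollection items)
        {
            using (var ms = new MemoryStream())
            using (var writer = new BinaryWriter(ms))
            {
                items.Serialize(writer);
                return ms.ToArray();
            }
        }

        private static SessionStateItemCollection Deserialize(byte[] data)
        {
            using (var ms = new MemoryStream(data))
            using (var reader = new BinaryReader(ms))
            {
                return SessionStateItemCollection.Deserialize(reader);
            }
        }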

    Read the article

  • What runs faster? Wordpress or Drupal 6.x?

    - by electblake
    So... I run a pretty large WordPress blog. Currently it gets around 20k+ pageviews a day, and it's always a struggle to keep the bad boy running quickly - I currently run a vps.net VPS with CentOS 5.3. I am also a Drupal developer by trade, so I love that CMS framework for its versatility and portability (I can take work from one site and implement it on another with great ease). MY QUESTION IS: which is faster, then - WordPress 3.x or Drupal 6.x? I'd love to migrate my site to Drupal to be able to roll out new features etc. (which I find awkward to do in WordPress), but I am scared that Drupal may not be able to handle the traffic. Any opinions? I know that some major players use Drupal - as Dries documents well on his blog - but I'm under no illusions: Drupal can be a real hog. Thanks for any/all help! Please try to avoid server-optimization talk unless it pertains to WordPress or Drupal 6.x specifically; I love to learn more about optimizations, but I do want to sort out which platform is quicker :) P.S. - I realize the fastest option is to use a lower-level framework (with less overhead) like CakePHP etc., but assume that isn't an option ;)

    Read the article

  • MouseListener fired without checking JCheckBox

    - by Morinar
    This one is pretty crazy: I've got an AppSight recording (for those not familiar, it's a recording of what the user did, including keyboard/mouse input, network traffic, etc.) of a customer reproducing a bug. Basically, we've got a series of items listed on the screen with JCheckBoxes down the left side. We've got a MouseListener set for the JPanel that looks something like this: private MouseAdapter createMouseListener() { return new MouseAdapter(){ public void mousePressed( MouseEvent e ) { if( e.getComponent() instanceof JCheckBox ) { // Do stuff } } }; } Based on the recording, it appears very strongly that they clicked just above one of the checkboxes. After that, it's my belief that this listener fired and the "Do stuff" block happened. However, it did NOT check the box. The user then saw that the box was unchecked, so they clicked on it. This caused the "Do stuff" block to fire again, thus undoing what it had done the first time. This time, the box was checked. Therefore, the user thinks that the box is checked, and it looks like it is, but our client thinks that the box is unchecked, as it was clicked twice. Is this possible at all? For the life of me, I can't reproduce it or see how it could be possible, but based on the recording and the data the client sent to the server, I can't see any other logical explanation. Any help, thoughts, and/or ideas would be much appreciated.

    Read the article

  • How to access a web service behind a NAT?

    - by jr
    We have a product we are deploying to some small businesses. It is basically a RESTful API over SSL using Tomcat. This is installed on the server in the small business and is accessed via an iPhone or other portable device, so the devices connecting to the server could come from any number of IP addresses. The problem comes with the installation. When we install this service, setting up port forwarding so the outside world can gain access to Tomcat always seems to become a problem - most of the time the owner doesn't know the router password, etc. I am trying to research other ways we can accomplish this. I've come up with the following and would like to hear other thoughts on the topic. 1) Set up an SSH tunnel from each client office to a central server. Basically the remote devices would connect to that central server on a port and that traffic would be tunneled back to Tomcat in the office. It seems kind of redundant to have SSH and then SSL, but there's really no other way to accomplish it, since end-to-end I need SSL (from device to office). I'm not sure of the performance implications here, but I know it would work. We would need to monitor the tunnel and bring it back up if it goes down, handle SSH key exchanges, etc. 2) Set up UPnP to try and configure the hole for me. This would likely work most of the time, but UPnP isn't guaranteed to be turned on. May be a good next step. 3) Come up with some type of NAT traversal scheme. I'm just not familiar with these and uncertain of exactly how they work. We have access to a centralized server, which is required for authentication, if that makes it any easier. What else should I be looking at to get this accomplished?

    Read the article

  • Sql Server Replication: Snapshot vs Merge

    - by Zyphrax
    Background information: let's say I have two database servers, both SQL Server 2008. One is on my LAN (ServerLocal), the other is in a remote hosting environment (ServerRemote). I have created a database on ServerLocal and have an exact copy of that database on ServerRemote. The database on ServerRemote is part of a web application and I would like to keep its data up-to-date with the data in the database on ServerLocal. ServerLocal is able to communicate with ServerRemote; this is one-way traffic. Communication from ServerRemote to ServerLocal isn't available. Current solution: I thought it would be a nice solution to use replication, so I've made ServerLocal a publisher and subscriptions are pushed to ServerRemote. This works fine: when a snapshot is transferred to ServerRemote, the existing data is purged and the ServerRemote database is once again an exact replica of the database on ServerLocal. The problem: records that exist on ServerRemote but don't exist on ServerLocal are removed. This doesn't matter for most of my tables, but in some of my tables I'd like to keep the existing data (aspnet_users for instance), and update the records if necessary. What kind of replication fits my problem?

    Read the article

  • what is the best way to optimize my json on an asp.net-mvc site

    - by ooo
    I am currently using jqGrid on an ASP.NET MVC site. We have a pretty slow network (internal application) and it seems to be taking the grid a long time to load (the issue is network as well as parsing and rendering). I am trying to determine how to minimize what I send over to the client to make it as fast as possible. Here is a simplified view of my controller action to load data into the grid: [AcceptVerbs(HttpVerbs.Get)] public ActionResult GridData1(GridData args) { var paginatedData = applications.GridPaginate(args.page ?? 1, args.rows ?? 10, i => new { i.Id, Name = "<div class='showDescription' id='" + i.Id + "'>" + i.Name + "</div>", MyValue = GetImageUrl(_map, i.Value, "star"), ExternalId = string.Format("<a href=\"{0}\" target=\"_blank\">{1}</a>", Url.Action("Link", "Order", new { id = i.Id }), i.Id), i.Target, i.Owner, EndDate = i.EndDate, Updated = "<div class='showView' aitId='" + i.AitId + "'>" + GetImage(i.EndDateColumn, "star") + "</div>", }); return Json(paginatedData); } So I am building up JSON data (I have about 200 records of the above) and sending it back to the GUI to put in the jqGrid. The one thing I can think of is repeated data: in some of the JSON fields I am appending HTML on top of the raw data, and it is the same HTML on every record. It seems like it would be more efficient if I could just send the data and "append" the HTML around it on the client side. Is this possible? Then I would just be sending the actual data over the wire and have the client side add the rest of the HTML (the divs, etc.). Also, if there are any other suggestions on how I can minimize the size of my messages, that would be great. I guess at some point this solution will increase the client-side load, but it may be worth it to cut down on network traffic.
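
    Yes, this is possible with jqGrid's client-side custom formatters: a formatter function wraps each cell's raw value in the repeated markup after the grid receives the data, so the HTML never crosses the wire. A hedged sketch of the slimmed-down action (names are reused from the question; JsonRequestBehavior.AllowGet applies to MVC 2 and later):

        [AcceptVerbs(HttpVerbs.Get)]
        public ActionResult GridData1(GridData args)
        {
            // Send raw values only; client-side formatters add the divs
            // and links around them when the grid renders.
            var paginatedData = applications.GridPaginate(
                args.page ?? 1, args.rows ?? 10,
                i => new { i.Id, i.Name, i.Value, i.Target, i.Owner, i.EndDate, i.AitId });
            return Json(paginatedData, JsonRequestBehavior.AllowGet);
        }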

    Read the article

  • How to check if new version of Chrome is available?

    - by serg
    I am trying to build an extension that would notify a user when a new version of Chrome is available. I tried to inspect network traffic while Chrome checks for an update: it sends a request to http://74.125.95.113/service/update2?w=3:{long_encoded_string}, a page that returns XML with the information I need: <?xml version="1.0" encoding="UTF-8"?> <gupdate xmlns="http://www.google.com/update2/response" protocol="2.0" server="prod"> <daystart elapsed_seconds="31272"/> <app appid="{8A69D345-D564-463C-AFF1-A69D9E530F96}" status="ok"> <updatecheck status="noupdate"/> <ping status="ok"/> </app> </gupdate> Besides sending {long_encoded_string} as a URL parameter, it is also sending an encoded cookie. Maybe someone familiar with the Chrome build process can shed some light on those encoded strings and how to build them? Or maybe there is another, easier way (I have a feeling that string encoding is a dead end for me)?

    Read the article

  • Wicket application + Apache + mod_jk - AJP queues are filling up!

    - by nojyarg
    Dear community, we have a Wicket-based Java application deployed in a production server cluster using Apache (2.2.3) with mod_jk (1.2.30) as the load-balancing component with sticky sessions, and JBoss 5 as the application container for the Java application. We are inconsistently seeing an issue in our production environment where the AJP queues between Apache and JBoss (as shown in the JMX console) fill up with requests to the point where the application server no longer takes on any new requests. When looking at all involved system components (overall traffic, DB load, DB process list, load of all clustered application server nodes), nothing points towards a capacity issue that would explain why the calls are being stalled in the AJP queue. Instead, all systems appear sufficiently idle. So far, our only remedy is to restart the app servers and the load balancer, which only occasionally clears the AJP queues. We are trying to figure out why the queues fill up to the point that no calls get returned to the end user although the system is not under high load. Has anyone else experienced similar problems? Are there any other system metrics we should monitor that could explain the queuing behavior? Is this potentially a mod_jk issue? If so, is it advisable to swap mod_jk for mod_cluster to resolve the issue? Any advice is highly appreciated. If I can provide additional information for the sake of troubleshooting, I would be more than willing to do so. /Ben
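
    Not a diagnosis, but the settings most often checked when AJP queues back up are the mod_jk connection-pool and probe timeouts, paired with a matching connectionTimeout on the JBoss AJP connector, so that stale backend connections are reaped instead of permanently occupying container threads. An illustrative workers.properties fragment (the directives are real mod_jk options; the values are examples, not recommendations):

        # Reap idle backend connections after 10 minutes; should be matched
        # by connectionTimeout="600000" on the JBoss AJP connector.
        worker.node1.connection_pool_timeout=600
        # Fail fast on unresponsive backends via CPing/CPong probes (ms).
        worker.node1.connect_timeout=10000
        worker.node1.prepost_timeout=10000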

    Read the article

  • g-wan - reproducing the performance claims

    - by user2603628
    Using gwan_linux64-bit.tar.bz2 under Ubuntu 12.04 LTS: after unpacking and running gwan, I pointed wrk at it (using a null file, null.html): wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1:8080/null.html Running 20s test @ http://127.0.0.1:8080/null.html 2 threads and 100 connections Thread Stats Avg Stdev Max +/- Stdev Latency 11.65s 5.10s 13.89s 83.91% Req/Sec 3.33k 3.65k 12.33k 75.19% 125067 requests in 20.01s, 32.08MB read Socket errors: connect 0, read 37, write 0, timeout 49 Requests/sec: 6251.46 Transfer/sec: 1.60MB ... very poor performance; in fact there seems to be some kind of huge latency issue. During the test gwan is 200% busy and wrk is 67% busy. Pointing at nginx, wrk is 200% busy and nginx is 45% busy: wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1/null.html Thread Stats Avg Stdev Max +/- Stdev Latency 371.81us 134.05us 24.04ms 91.26% Req/Sec 72.75k 7.38k 109.22k 68.21% 2740883 requests in 20.00s, 540.95MB read Requests/sec: 137046.70 Transfer/sec: 27.05MB Pointing weighttp at nginx gives even faster results: /usr/local/bin/weighttp -k -n 2000000 -c 500 -t 3 http://127.0.0.1/null.html weighttp - a lightweight and simple webserver benchmarking tool starting benchmark... spawning thread #1: 167 concurrent requests, 666667 total requests spawning thread #2: 167 concurrent requests, 666667 total requests spawning thread #3: 166 concurrent requests, 666666 total requests progress: 9% done progress: 19% done progress: 29% done progress: 39% done progress: 49% done progress: 59% done progress: 69% done progress: 79% done progress: 89% done progress: 99% done finished in 7 sec, 13 millisec and 293 microsec, 285172 req/s, 57633 kbyte/s requests: 2000000 total, 2000000 started, 2000000 done, 2000000 succeeded, 0 failed, 0 errored status codes: 2000000 2xx, 0 3xx, 0 4xx, 0 5xx traffic: 413901205 bytes total, 413901205 bytes http, 0 bytes data The server is a virtual 8-core dedicated server (bare metal) under KVM. Where do I start looking to identify the problem gwan is having on this platform? I have tested lighttpd, nginx and node.js on this same OS, and the results are all as one would expect. The server has been tuned in the usual way: expanded ephemeral ports, increased ulimits, adjusted TIME_WAIT recycling, etc.

    Read the article

  • Implementing client callback functionality in WCF

    - by PoweredByOrange
    The project I'm working on is a client-server application with all services written in WCF and the client in WPF. There are cases where the server needs to push information to the client. I initially thought about using WCF Duplex Services, but after doing some research online, I found that a lot of people avoid them for many reasons. The next thing I thought about was having the client host a connection endpoint, so that the server could use it to make a service call to the client. The problem, however, is that the application is deployed over the internet, so that approach requires configuring the firewall to allow incoming traffic, and since most of the users are regular users, it might also require configuring the router to allow port forwarding, which again is a hassle for the user. My third option is for the client to spawn a background thread which makes a call to the GetNotifications() method on the server. This method on the server side then blocks until an actual notification is created, at which point the thread is woken (using an AutoResetEvent object, maybe?) and the information gets sent to the client. The idea is something like this: Client private void InitializeListener() { Task.Factory.StartNew(() => { while (true) { var notification = server.GetNotifications(); // Display the notification. } }, CancellationToken.None, TaskCreationOptions.LongRunning, TaskScheduler.Default); } Server public NotificationObject GetNotifications() { while (true) { notificationEvent.WaitOne(); return someNotificationObject; } } private void NotificationCreated() { // Inform the client of this event. notificationEvent.Set(); } In this case, NotificationCreated() is a callback method called when the server needs to send information to the client. What do you think about this approach? Is this scalable at all?
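
    One refinement worth considering for this long-polling approach: give the blocking call a timeout, so a poll can return empty and be reissued instead of holding a server thread (and any proxy in between) forever. A sketch reusing the names from the question; the 30-second window is illustrative:

        public NotificationObject GetNotifications()
        {
            // Wait up to 30 seconds for NotificationCreated() to signal;
            // a null result simply tells the client to poll again.
            bool signaled = notificationEvent.WaitOne(TimeSpan.FromSeconds(30));
            return signaled ? someNotificationObject : null;
        }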

    Read the article

  • Random problem connecting to MySQL

    - by CharlesLeaf
    Environment: RHEL 5 servers, MySQL 5.1.43, PHP 5.1.6 (using MySQLi). Currently only available within our internal VPN network. Servers: ServerA: webserver; ServerB/C/D: database servers (1 master, 2 slaves). The error (on ServerA): [Tue May 25 11:12:17 2010] [error] [client CLIENTIP] PHP Warning: mysqli::real_connect() [function.mysqli-real-connect]: (HY000/2003): Can't connect to MySQL server on 'ServerB' (4) in /home/**/Database.php on line 67, referer: [website] Problem description: it appears that at completely random times, our website is unable to connect to one of the MySQL servers - usually the master. Except for the aforementioned error message, there is nothing to be found in any of the logs as far as I can see, and most of the time the connection is successful and everything works as it should. It's just at completely random times that this error pops up. There's no firewall blocking any internal traffic; the timeout value is 3, but it doesn't take 3 seconds before it fails to connect. With the default mysql client I can connect from ServerA to ServerB, C and D and haven't encountered a problem yet. Does anyone have a clue what I might be overlooking / what could be the problem? Because I've run out of ideas myself.

    Read the article

  • Redeploying an ASP.NET site in IIS7 without files in use interfering

    - by fyjham
    Hey, we've currently got a process which redeploys ASP.NET websites; the deployment code is itself an ASP.NET application. The current method, which has worked for quite a while, is simply to loop over all the files in one folder and copy them over the top of the files in the webroot. The problem that's arisen is that occasionally files end up being in use and hence can't be copied over. In the past this was intermittent to the point it didn't matter, but on some of our higher-traffic sites it happens the majority of the time now. I'm wondering if anyone has a workaround or alternative approach to this that I haven't thought of. Currently my ideas are: 1) Simply retry each file until it works. That's going to cause errors for a short time, though, which isn't really that good. 2) Deploy to a new folder and update IIS's webroot to the new folder. I'm not sure how to do this short of running the application as an administrator and running batch files, which is very untidy. Does anyone know what the best way to do this is, or whether it's possible to do #2 without running the publishing application as a user who has admin access? (I'm willing to grant it special privileges, but I'd prefer to stop short of administrator.)
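
    One more approach worth considering, sketched below as an assumption rather than tested deployment code: drop ASP.NET's app_offline.htm marker into the webroot, which makes the runtime unload the application and release its file locks, copy the new files, then delete the marker.

        void Redeploy(string sourceDir, string webRoot)
        {
            string marker = Path.Combine(webRoot, "app_offline.htm");
            // While this file exists, ASP.NET serves it for every request
            // and unloads the application, releasing locked assemblies.
            File.WriteAllText(marker, "<html><body>Updating...</body></html>");
            try
            {
                foreach (string file in Directory.GetFiles(
                    sourceDir, "*", SearchOption.AllDirectories))
                {
                    string relative = file.Substring(sourceDir.Length)
                                          .TrimStart(Path.DirectorySeparatorChar);
                    string target = Path.Combine(webRoot, relative);
                    Directory.CreateDirectory(Path.GetDirectoryName(target));
                    File.Copy(file, target, true);
                }
            }
            finally
            {
                File.Delete(marker); // bring the site back up
            }
        }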

    Read the article

  • Are there any NOSQL-compatible CMS projects?

    - by Michael
    This question is partially related to an older question (Any CMS is Google App Engine compatible?) , but is slightly more general. It seems that in most CMS systems, the most fragile failure point is the database. Traditional database implementations scale poorly and will never be able to handle unforeseen spikes of traffic. Since Google App Engine was designed to help even small businesses overcome that problem, I had the same question that was asked earlier this year with less than satisfactory answers. But more generally, where are the CMS projects that support NOSQL databases? Looking over Wikipedia's list of CMS platforms, I see without much effort that only traditional RDBMS are supported by every single vendor on the list. I would have expected to see at least one or two projects handling CouchDB or similar engines. I understand the complexities of implementing a NOSQL solution to a problem that is typically solved using the relations cleanly expressed in any RDBMS, but there seems to be a rather wide market gap. Since databases are, today, easily outsourced to Google, Amazon, and others which use NOSQL models, I am amazed that there are not more projects actively pursuing this path. Am I simply not aware? Can someone please point me to projects that have real momentum that are developing on this path? I'm looking for two things: a CMS that has as its backbone a NOSQL database enabling easy database outsourcing (hosted MySQL clusters and similar solutions are not what I'm looking for) a project that is built to run on either a PaaS architecture like Google App Engine or an IaaS architecture like Amazon EC2 Any pointers in that direction would be most welcome.

    Read the article

  • How to configure Server Topology for exposing an internal application for external access?

    - by ronaldwidha
    Hi all, I guess this question is bordering on a Server Fault question. I'd like to know the best configuration for exposing an internal application (in this case a load-balanced ASP.NET MVC application) for external access. More details about the situation: the ASP.NET MVC application is currently running on 2 servers; the 2 servers are behind a Windows Network Load Balancer; all the servers are on the on-premise/internal network. I'm thinking of introducing an F5 load balancer in an off-premise DMZ to replace the Windows Network Load Balancer. F5 will act as the public traffic gateway and load balancer for the 2 servers. However, I'd like the internal users to not have to go through the Internet to access the app. The idea that I have so far is to keep both the Windows Network Load Balancer and the F5. Each appliance will have its own IP and its own domain name. External users can use the public domain name, which will hit F5, whereas internal users can use the internal domain name, which will hit the Windows Network Load Balancer. Is this a good idea? Or is there a better way of doing this?

    Read the article

  • SQL Group by Minute- Expanded

    - by Barnie
    I am working on something similar to this post here: T-SQL - group by minute. However, mine is pulling from a message queue, and I need to see an accurate count of the amount of traffic the message queue is creating/sending, and at what time: Select * From MessageQueue mq My expanded version of this is the following: A) The user defines a start time and an end time (easy enough using DECLAREd @StartTime and @EndTime). B) Give the user the option of choosing the "grouping": will it be broken out by 1-minute, 5-minute, 15-minute, or 30-minute (max) intervals? (I had thought to do this with a CASE statement, but my test queries fall apart on me.) C) Display the data to accurately show a count of what happened during the selected interval (grouping). This is where I am at so far, SQL blob: DECLARE @StartTime datetime DECLARE @EndTime datetime SELECT DATEPART(n, mq.cre_date)/5 as Time --Trying to just sort by 5 minute intervals ,CONVERT(VARCHAR(10),mq.cre_date,101) ,COUNT(*) as results FROM dbo.MessageQueue mq WHERE mq.cre_date BETWEEN @StartTime AND @EndTime GROUP BY DATEPART(n, mq.cre_date)/5 --Trying to just sort by 5 minute intervals ,CONVERT(VARCHAR(10),mq.cre_date,101) This is the output I would like to achieve: [Time] [Date] [Message Count] 1300 06/26/2012 5 1305 06/26/2012 1 1310 06/26/2012 100

    Read the article

  • Suggestion on Database structure for relational data

    - by miccet
    Hi there. I've been wrestling with this problem for quite a while now and the automatic mails with 'Slow Query' warnings are still popping in. Basically, I have Blogs with a corresponding table, as well as a table that keeps track of how many times each Blog has been viewed. This last table has a huge number of records since the page is relatively high-traffic and it logs every hit as an individual row. I have tried indexes on the fields that are included in the WHERE clause, but they don't seem to help. I have also tried to clean the table each week by removing old (older than 1 week) records. SO, I'm asking you guys, how would you solve this? The query that I know is causing the slowness is generated by Rails and looks like this: SELECT count(*) AS count_all FROM blog_views WHERE (created_at >= '2010-01-01 00:00:01' AND blog_id = 1); The tables have the following structures: CREATE TABLE IF NOT EXISTS `blogs` ( `id` int(11) NOT NULL auto_increment, `name` varchar(255) default NULL, `perma_name` varchar(255) default NULL, `author_id` int(11) default NULL, `created_at` datetime default NULL, `updated_at` datetime default NULL, `blog_picture_id` int(11) default NULL, `blog_picture2_id` int(11) default NULL, `page_id` int(11) default NULL, `blog_picture3_id` int(11) default NULL, `active` tinyint(1) default '1', PRIMARY KEY (`id`), KEY `index_blogs_on_author_id` (`author_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ; And CREATE TABLE IF NOT EXISTS `blog_views` ( `id` int(11) NOT NULL auto_increment, `blog_id` int(11) default NULL, `ip` varchar(255) default NULL, `created_at` datetime default NULL, `updated_at` datetime default NULL, PRIMARY KEY (`id`), KEY `index_blog_views_on_blog_id` (`blog_id`), KEY `created_at` (`created_at`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;

    Read the article

  • Scalability 101: How can I design a scalable web application using PHP?

    - by Legend
    I am building a web application and have a couple of quick questions. From what I've learnt, one should not worry about scalability when initially building the app and should only start worrying when the traffic increases. However, this being my first web application, I am not quite sure if I should take an approach where I design things in an ad-hoc manner and later "fix" them. I have been reading stories about how people start off with an app that gets millions of users in a week or two. Not that I will face the same situation, but I can't help but wonder: how do these people do it? Currently, I bought a shared hosting account on Lunarpages and that got me started in building and testing the application. However, I am interested in learning how to build the same application in a scalable manner using the cloud, for instance Amazon's EC2. From my understanding, I can see a couple of components: there is a load balancer that first receives requests and then decides where to route each request; the request is then handled by a server replica that processes it, updates (if required) the database, and sends back the response to the client; if a similar request comes in, then a caching mechanism like memcached kicks in and returns objects from the cache; and a black box that handles database replication. Specifically, I am trying to do the following: set up a load balancer (my homework revealed that HAProxy is one such load balancer); set up replication so that databases can be synchronized; use memcached; configure Apache to work with multiple web servers; and partition the application to use Amazon EC2 and Amazon S3 (my application is something that will need a great deal of storage). Finally, how can I avoid burning myself when using Amazon services? Because this is just a learning phase, I can probably do with 2-3 servers, a simple load balancer and replication, but I want to avoid accidentally paying loads of money. I am able to find resources on individual topics but am unable to find something that starts off from the big picture. Can someone please help me get started?

    Read the article

  • GridView ObjectDataSource LINQ Paging and Sorting using multiple table query.

    - by user367426
    I am trying to create a paging and sorting object data source that sorts the full result set before paging and then uses the Take and Skip methods, so that just a subset of results is retrieved from the database (saving on database traffic). This is based on the following article: http://www.singingeels.com/Blogs/Nullable/2008/03/26/Dynamic_LINQ_OrderBy_using_String_Names.aspx Now, I have managed to get this working, even creating lambda expressions to reflect the sort expression returned from the grid, and even finding out the data type to sort by for DateTime and Decimal: public static string GetReturnType<TInput>(string value) { var param = Expression.Parameter(typeof(TInput), "o"); Expression a = Expression.Property(param, "DisplayPriceType"); Expression b = Expression.Property(a, "Name"); Expression converted = Expression.Convert(Expression.Property(param, value), typeof(object)); Expression<Func<TInput, object>> mySortExpression = Expression.Lambda<Func<TInput, object>>(converted, param); UnaryExpression member = (UnaryExpression)mySortExpression.Body; return member.Operand.Type.FullName; } Now the problem I have is that many of the queries join other tables, and I would like to sort on fields from those tables. When executing a query you can create a function that will assign the properties from other tables to properties created in the partial class: public static Account InitAccount(Account account) { account.CurrencyName = account.Currency.Name; account.PriceTypeName = account.DisplayPriceType.Name; return account; } So my question is: is there a way to assign the value from the joined table to the property of the current table's partial class? I have tried using: from a in dc.Accounts where a.CompanyID == companyID && a.Archived == null select new { PriceTypeName = a.DisplayPriceType.Name } but this seems to mess up my SortExpression. Any help on this would be much appreciated; I do understand that this is complex stuff.
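
    One way to keep the query element type as Account (so the dynamic OrderBy still composes with Skip and Take in the database) is to let the sort-expression builder walk a dotted property path such as "DisplayPriceType.Name" instead of projecting into an anonymous type. A sketch extending the pattern from the code above; in my experience LINQ to SQL can translate the resulting ORDER BY across the join, but treat that as an assumption to verify:

        public static Expression<Func<T, object>> BuildSort<T>(string path)
        {
            var param = Expression.Parameter(typeof(T), "o");
            Expression body = param;
            // "DisplayPriceType.Name" becomes o.DisplayPriceType.Name
            foreach (var member in path.Split('.'))
                body = Expression.Property(body, member);
            var converted = Expression.Convert(body, typeof(object));
            return Expression.Lambda<Func<T, object>>(converted, param);
        }

    Usage would look like dc.Accounts.OrderBy(BuildSort<Account>("DisplayPriceType.Name")).Skip(pageSize * page).Take(pageSize).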

    Read the article

  • How to use urlencoded urls with the GA tracking code

    - by Fake51
    I've got a site where a booking page has an iframe embedded with the actual booking form. I need to track traffic from the parent site to the child iframe. This should all work just fine with the normal GA code, using JavaScript like: <script type="text/javascript"> try { var pageTracker = _gat._getTracker("<UA CODE HERE>"); pageTracker._setDomainName("none"); pageTracker._setAllowLinker(true); pageTracker._setAllowHash(false); pageTracker._trackPageview(); } catch(err) {}</script> And then, of course, using the _getLinkerUrl() function to get a URL with the proper parameters. So far so good - this basically works (at least I know the principle works, as I've got it working on other pages). However, and this is the problem: the server that serves up the page in the iframe was configured by a complete and utter moron (or, alternatively, created by a complete and utter moron). It chokes on '=' characters, so in order to request the iframe page I need to urlencode the '=' signs - but the GA code seems unable to parse the URL when this is done. So the questions: 1. Has anyone come across this? 2. Does anyone know of any solutions to this problem?

    Read the article

  • Best solution for __autoload

    - by tpk
    As our PHP5 OO application grew (in both size and traffic), we decided to revisit the __autoload() strategy. We always name the file by the class definition it contains, so class Customer would be contained within Customer.php. We used to list the directories in which a file could potentially exist, until the right .php file was found. This is quite inefficient, because you're potentially going through a number of directories you don't need to, and doing so on every request (thus making loads of stat() calls). Solutions that come to my mind: use a naming convention that dictates the directory name (similar to PEAR) - disadvantage: doesn't scale too well, resulting in horrible class names; come up with some kind of pre-built array of the locations (Propel does this for its __autoload) - disadvantage: requires a rebuild before any deploy of new code; or build the array "on the fly" and cache it. This last one seems to be the best solution, as it allows for any class names and directory structure you want, and is fully flexible in that new files just get added to the list. The concerns are: where to store it, and what about deleted/moved files? For storage we chose APC, as it doesn't have the disk I/O overhead. With regards to file deletes, it doesn't matter, as you probably don't wanna require them anywhere anyway. As to moves... that's unresolved (we ignore it, as historically it didn't happen very often for us). Any other solutions?

    Read the article

  • Expose webservice directly to webclients or keep a thin server-side script layer in between?

    - by max
    Hi, I'm developing a REST webservice (Java, Jersey). The people I'm doing this for want to access the webservice directly via JavaScript. Some instinct tells me this is not a good idea, but I cannot really explain that instinct. My natural approach would have been to have the webservice do the real logic and database access, but also have some (relatively thin) server-side script layer (e.g. in PHP). Clients would talk to the PHP layer, which in turn would talk to the webservice. (The webservice would be pretty local to the Apache/PHP server and would implicitly trust calls from the script layer. The script layer would take care of session management.) (Btw, I am not talking about just hiding the webservice behind an Apache which simply redirects calls.) But as I find myself at a loss for words/arguments to explain my instinct, I wonder whether my instinct is right - note that while I have been developing all kinds of software in all kinds of languages and frameworks for some 17 years, this is the first time I've developed a webservice. So my question is basically: what are your opinions? Are there any standard setups? Is my instinct totally wrong? Or partially? ;P Many thanks, Max PS: I might add a few bits of information about the planned usage of the whole application: it will be accessed by different kinds of users, partly the general public, partly privileged; thus, all major OS/browser combinations can be expected as clients (however, writing the client is not my responsibility); it will potentially have very high load/traffic; the logic of the webservice will later be massively expanded for another product which is basically a superset of the functionality of the current project; there is a significant likelihood that at some point an API should be exposed for use by 3rd-party developers - obviously, with some restrictions; and at some point the public view of the product should become accessible via smartphones, too (in other words, maybe a customized version of the site to adapt to the smaller display and different input methods).

    Read the article

  • Why does PDO print my password when the connection fails?

    - by Joe Hopfgartner
    I have a simple website where I establish a connection to a MySQL server using PDO: $dbh = new PDO('mysql:host=localhost;dbname=DB;port=3306', 'USER', 'SECRET',array(PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES utf8")); I had some traffic on my site, the server's connection limit was reached, and the website threw this error, with my password in PLAIN TEXT in it! Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[08004] [1040] Too many connections' in /home/premiumize-me/html/index.php:64 Stack trace: #0 /home/premiumize-me/html/index.php(64): PDO->__construct('mysql:host=loca...', 'USER', 'SECRET', Array) #1 {main} thrown in /home/premiumize-me/html/index.php on line 64 Ironically, I switched to PDO for security reasons, so this really shocked me - especially because this exact error is something you can provoke very easily on most sites using simple HTTP flooding. I have now wrapped my connection in a try/catch block, but still, I think this is catastrophic! So, as I am new to PDO, my question is: what do I have to consider to be safe? How do I establish a connection in a secure way? Are there other known security holes like this one that I have to be aware of?

    Read the article
