Search Results

Search found 4990 results on 200 pages for 'traffic measurement'.

Page 184/200

  • pattern for the following condition in java

    - by zahir hussain
    Hi, I want to know how to write a pattern. For example, the text is: "AboutGoogle AdWords Drive traffic and customers to your site. Pay through Cheque, Net Banking or Credit Card. Google Toolbar Add a search box to your browser. Google SMS To find out local information simply SMS to 54664. Gmail Free email with 7.2GB storage and less spam. Try Gmail today. Our ProductsHelp Help with Google Search, Services and ProductsGoogle Web Search Features Translation, I'm Feeling Lucky, CachedGoogle Services & Tools Toolbar, Google Web APIs, ButtonsGoogle Labs Ideas, Demos, ExperimentsFor Site OwnersAdvertising AdWords, AdSenseBusiness Solutions Google Search Appliance, Google Mini, WebSearchWebmaster Central One-stop shop for comprehensive info about how Google crawls and indexes websitesSubmit your content to Google Add your site, Google SitemapsOur CompanyPress Center News, Images, ZeitgeistJobs at Google Openings, Perks, CultureCorporate Info Company overview, Philosophy, Diversity, AddressesInvestor Relations Financial info, Corporate governanceMore GoogleContact Us FAQs, Feedback, NewsletterGoogle Logos Official Logos, Holiday Logos, Fan LogosGoogle Blog Insights to Google products and cultureGoogle Store Pens, Shirts, Lava lamps©2010 Google - Privacy Policy - Terms of Service"

    I have to search for some word in it, for example "google insights". How do I write the code for this in Java? I wrote a small piece of code; please check it and answer my question. My code only finds where the search word is, but I need to display some words in front of the search word and some words after it, similar to a Google search result. My code is:

        Pattern p = Pattern.compile("(?i)(.*?)" + search + "");
        Matcher m = p.matcher(full);
        String title = "";
        while (m.find()) {
            title = m.group(1);
            System.out.println(title);
        }

    Here full is the original content and search is the search word. Thanks in advance.
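    What the question describes is a snippet window around each hit, which is easier to build from the match offsets than from extra capture groups. A minimal sketch (the 30-character context width and the class name are illustrative assumptions, not part of the original code):

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class SnippetDemo {
            public static void main(String[] args) {
                String full = "...the original content...";
                String search = "google insights";
                int context = 30; // assumed: characters of context on each side

                // Pattern.quote() makes regex metacharacters in the search word literal.
                Pattern p = Pattern.compile(Pattern.quote(search), Pattern.CASE_INSENSITIVE);
                Matcher m = p.matcher(full);
                while (m.find()) {
                    int start = Math.max(0, m.start() - context);
                    int end = Math.min(full.length(), m.end() + context);
                    System.out.println("..." + full.substring(start, end) + "...");
                }
            }
        }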

    Read the article

  • use php to parse json data posted from another php file

    - by akshay.is.gr8
    My web hosting has blocked outward traffic, so I am using a free web host to read data and post it to my server. The problem is that my PHP file receives the data in the $_REQUEST variable but is not able to parse it.

    post.php:

        function postCon($pCon) {
            //echo $pCon;
            $ch = curl_init('http://localhost/rss/recv.php');
            curl_setopt($ch, CURLOPT_POST, 1);
            curl_setopt($ch, CURLOPT_POSTFIELDS, "data=$pCon");
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            $d = curl_exec($ch);
            echo $d . "<br />";
            curl_close($ch);
        }

    recv.php:

        <?php
        if (!json_decode($_REQUEST['data']))
            echo "json error";
        echo "<pre>";
        print_r($data);
        echo "</pre>";
        ?>

    Every time it gives "json error", but echo $_REQUEST['data'] shows the correct JSON data. Please help.
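    Two likely culprits, given those symptoms: the JSON is sent unencoded in "data=$pCon" (any & or = inside it corrupts the field), and recv.php never assigns $data before printing it. A hedged sketch of both fixes, assuming the same endpoint:

        // post.php: urlencode() protects characters such as & and = in the JSON.
        function postCon($pCon) {
            $ch = curl_init('http://localhost/rss/recv.php');
            curl_setopt($ch, CURLOPT_POST, 1);
            curl_setopt($ch, CURLOPT_POSTFIELDS, 'data=' . urlencode($pCon));
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            $d = curl_exec($ch);
            curl_close($ch);
            return $d;
        }

        // recv.php: strip magic quotes if enabled, decode once, keep the result.
        $raw = get_magic_quotes_gpc() ? stripslashes($_REQUEST['data']) : $_REQUEST['data'];
        $data = json_decode($raw, true);
        if ($data === null) {
            echo "json error";
        } else {
            echo "<pre>";
            print_r($data);
            echo "</pre>";
        }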

    Read the article

  • How to publish internal data to the internet - as simple as possible

    - by mlarsen
    We have a .NET two-tier application where a desktop program talks to a database. We support MS SQL Server 2000, 2005 and 2008, and Oracle 9, 10 and 11. The application is sold, not as shrink-wrap, but pretty close. It is quite important for us that installation and configuration are as easy as possible, since installation instructions are usually supplied in written form to the customer's internal IT department. Our application is usually not seen as mission-critical by the IT department, so we need to keep their work down to a minimum. Now we are starting to get requests for a web application built on top of the same data. The web application will be hosted by us and delivered as a SaaS application. The challenge is how to move data back and forth between the web application and the customer's internal database. As I see it, we have some requirements:

    - We must be ready to handle the situation where the customer's database is not accessible from the DMZ. I guess the easiest solution is that all communication is initiated from inside the customer's LAN.
    - As little firewall configuration as possible. Best of all, we run without any special configuration as long as outgoing traffic from the customer's LAN is not blocked. If we need something changed in the firewall, we must be able to document that the change is secure.
    - It doesn't have to be real time. Moving data in batches every ten minutes or so is OK.
    - Data moves both ways, but not in the same tables, so we don't have to support merges.
    - It would be nice if we didn't have to roll our own framework completely.

    Looking forward to hearing your suggestions. (A minimal sketch of the client-initiated approach follows below.)
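    The first two requirements are usually met with an agent inside the customer's LAN that pushes and pulls batches over plain outbound HTTPS, so no inbound ports or firewall changes are needed. A hedged sketch; the URL, method names and payload format are assumptions, not a design:

        using System;
        using System.Net;
        using System.Threading;

        class SyncAgent
        {
            // Assumed endpoint of the hosted SaaS application.
            const string ServiceUrl = "https://saas.example.com/sync";

            static void Main()
            {
                for (;;)
                {
                    try
                    {
                        using (var client = new WebClient())
                        {
                            // Push local changes outward as one batch.
                            string outgoing = ReadPendingRowsFromLocalDb();
                            client.UploadString(ServiceUrl + "/upload", outgoing);

                            // Pull changes made in the web application.
                            string incoming = client.DownloadString(ServiceUrl + "/download");
                            ApplyRowsToLocalDb(incoming);
                        }
                    }
                    catch (WebException) { /* log and retry on the next cycle */ }

                    Thread.Sleep(TimeSpan.FromMinutes(10)); // batch interval from the question
                }
            }

            // Placeholders: real implementations depend on the customer's schema.
            static string ReadPendingRowsFromLocalDb() { return ""; }
            static void ApplyRowsToLocalDb(string rows) { }
        }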

    Read the article

  • sql-server performance optimization by removing print statements

    - by AG
    We're going through a round of SQL Server stored procedure optimizations. The one recommendation we've found that clearly applies for us is SET NOCOUNT ON at the top of each procedure. (Yes, I've seen the posts that point out issues with this depending on which client objects you run the stored procedures from, but these are not issues for us.) So now I'm just trying to add in a bit of common sense: if the benefit of SET NOCOUNT ON is simply to reduce network traffic by some small amount on every call, wouldn't it also make sense to turn off all the PRINT statements we have in the stored procedures that we only use for debugging? I can't see how it could hurt performance. On the other hand, it's a bit of a hassle to implement, because some of the PRINT statements are the only statement within an ELSE clause, so you can't always just comment out the one line and be done. The change carries some amount of risk, so I don't want to do it if it isn't actually going to help. But I don't see eliminating PRINT statements mentioned anywhere in articles on optimization. Is that because it is so obvious that no one bothers to mention it?
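    Both ideas can coexist if the PRINTs are gated rather than deleted: a debug flag costs nothing on the wire when it is off, and it sidesteps the empty-ELSE problem because the statement stays in place. A minimal sketch (the @Debug parameter and names are assumed conventions, not existing code):

        -- Sketch only: procedure and message names are placeholders.
        CREATE PROCEDURE dbo.SomeProc
            @Debug bit = 0
        AS
        BEGIN
            SET NOCOUNT ON;   -- suppress the 'N rows affected' messages

            IF @Debug = 1 PRINT 'entering branch A';   -- sends nothing when @Debug = 0

            -- ... body of the procedure ...
        END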

    Read the article

  • Redirect uploaded files to another server, using nginx

    - by Serg ikS
    I am creating a web service of scheduled posts to a social network, and need help dealing with file uploads under high traffic.

    Process overview: the user uploads files to SomeServer (not mine); SomeServer then responds with a JSON string; my web app should store that JSON response.

    Option 1: save, cURL POST, delete tmp. The stupid way I made it work: the user uploads files to MyWebApp, and MyWebApp cURLs the file onward to SomeServer, getting the response.

    Option 2: JS magic. The smart way it could be perfect: the user uploads the file directly to SomeServer from within an iframe, and MyWebApp gets the response through JavaScript. But this is impossible due to the Same Origin Policy, isn't it?

    Option 3: nginx proxying? The better way for a production server: the user uploads files to MyWebApp; nginx intercepts the file uploads and sends them directly to SomeServer; the JSON response is also intercepted by nginx and processed by MyWebApp.

    Does this make any sense, and what would the nginx config be for, say, a /fileupload location that proxies to SomeServer?
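    For option 3, a proxied upload location looks roughly like the sketch below (upload.someserver.example is an assumed upstream host, and client_max_body_size needs tuning to real file sizes). Note that stock nginx proxying only relays the JSON response back to the client; getting it into MyWebApp still needs an extra step, such as the client posting it back or the app fetching it itself.

        location /fileupload {
            client_max_body_size 20m;                     # assumed upload limit
            proxy_pass http://upload.someserver.example/;
            proxy_set_header Host upload.someserver.example;
            proxy_set_header X-Real-IP $remote_addr;
        }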

    Read the article

  • What's the most efficient way to set up a multi-lingual website

    - by Jasper De Bruijn
    Hi, I'm developing a website that will be available in different languages. It is a LAMP (Linux, Apache, MySQL, PHP) setup, and it makes use of Smarty, mostly as the template engine. We currently translate with a self-written Smarty plugin, which recognizes certain tags in the HTML files and finds the corresponding tag in a previously defined language file. The HTML could look as follows:

        <p>Hi, welcome to $#gamedesc;!</p>

    And the language file could look like this:

        gamedesc:Poing 2009$;
        welcome:this is another tag$;

    Which would then output:

        <p>Hi, welcome to Poing 2009!</p>

    This system is very basic, but it is pretty hard to control if, for example, I would like to keep track of what has been translated so far, or give certain users the rights to translate only certain tags. I've been looking at some alternative approaches: either replacing the text file with XML files that could store some extra metadata, or perhaps storing all the texts in the database and retrieving them from there. My question is, what would be the best way to make this system both maintainable and able to perform well under high user traffic? Are there perhaps any (lightweight) plugins I could take a look at?
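    One pattern that covers both wishes (translation metadata and speed under load) is to keep translations in the database but compile each language to a plain PHP array file, so page views never hit the database. A hedged sketch; table, column and path names are assumptions:

        <?php
        // Run after an editor saves a translation: dump one language to a file.
        function compile_language(PDO $db, $lang) {
            $stmt = $db->prepare('SELECT tag, text FROM translations WHERE lang = ?');
            $stmt->execute(array($lang));
            $pairs = array();
            foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
                $pairs[$row['tag']] = $row['text'];
            }
            file_put_contents("/var/cache/lang/{$lang}.php",
                              '<?php return ' . var_export($pairs, true) . ';');
        }

        // Called by the Smarty plugin at render time: a cheap array lookup.
        function t($tag, $lang) {
            static $cache = array();
            if (!isset($cache[$lang])) {
                $cache[$lang] = include "/var/cache/lang/{$lang}.php";
            }
            return isset($cache[$lang][$tag]) ? $cache[$lang][$tag] : $tag;
        }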

    Read the article

  • Nhibernate multilevel hierarchy save error?

    - by nisbus
    Hi, I have a database with a six-level hierarchy and a domain model on top of it, something like this:

        Category
          - SubCategory
            - Container
              - DataDescription | MetaData
                - Data

    The mapping I'm using follows this pattern:

        <class name="Category, Sample" table="Categories">
          <id name="Id" column="Id" type="System.Int32" unsaved-value="0">
            <generator class="native"/>
          </id>
          <property name="Name" access="property" type="String" column="Name"/>
          <property name="Metadata" access="property" type="String" column="Metadata"/>
          <bag name="SubCategories" cascade="save-update" lazy="true" inverse="true">
            <key column="Id" foreign-key="category_subCategory_fk"/>
            <one-to-many class="SubCategory, Sample"/>
          </bag>
        </class>

        <class name="SubCategory, Sample" table="SubCategories">
          <id name="Id" column="Id" type="System.Int32" unsaved-value="0">
            <generator class="native"/>
          </id>
          <many-to-one name="Category" class="Category, Sample" foreign-key="subCat_category_fk"/>
          <property name="Name" access="property" type="String"/>
          <property name="Metadata" access="property" type="String"/>
          <bag name="Containers" inverse="true" cascade="save-update" lazy="true">
            <key column="Id" foreign-key="subCat_container_fk"/>
            <one-to-many class="Container, Sample"/>
          </bag>
        </class>

        <class name="Container, Sample" table="Containers">
          <id name="Id" column="Id" type="System.Int32" unsaved-value="0">
            <generator class="assigned"/>
          </id>
          <many-to-one name="SubCategory" class="SubCategory,Sample" foreign-key="container_subCat_fk"/>
          <property name="Name" access="property" type="String" column="Name"/>
          <bag name="DataDescription" cascade="all" lazy="true" inverse="true">
            <key column="Id" foreign-key="container_DataDescription_fk"/>
            <one-to-many class="DataDescription, Sample"/>
          </bag>
          <bag name="MetaData" cascade="all" lazy="true" inverse="true">
            <key column="Id" foreign-key="container_metadata_cat_fk"/>
            <one-to-many class="MetaData, Sample"/>
          </bag>
        </class>

    For some reason, when I try to save the category (with the subcategory, container, etc. attached) I get a foreign key violation from the database. The code is something like this (pseudo):

        var category = new Category();
        var subCategory = new SubCategory();
        var container = new Container();
        var dataDescription = new DataDescription();
        var metaData = new MetaData();

        category.AddSubCategory(subCategory);
        subCategory.AddContainer(container);
        container.AddDataDescription(dataDescription);
        container.AddMetaData(metaData);

        Session.Save(category);

    Here is the log from this test:

        DEBUG NHibernate.SQL - INSERT INTO Categories (Name, Metadata) VALUES (@p0, @p1); select SCOPE_IDENTITY();
            @p0 = 'Unit test', @p1 = 'unit test'
        DEBUG NHibernate.SQL - INSERT INTO SubCategories (Category, Name, Metadata) VALUES (@p0, @p1, @p2); select SCOPE_IDENTITY();
            @p0 = '1', @p1 = 'Unit test', @p2 = 'unit test'
        DEBUG NHibernate.SQL - INSERT INTO Containers (SubCategory, Name, Frequency, Scale, Measurement, Currency, Metadata, Id) VALUES (@p0, @p1, @p2, @p3, @p4, @p5, @p6, @p7);
            @p0 = '1', @p1 = 'Unit test', @p2 = '15', @p3 = '1', @p4 = '1', @p5 = '1', @p6 = 'unit test', @p7 = '0'
        ERROR NHibernate.Util.ADOExceptionReporter - The INSERT statement conflicted with the FOREIGN KEY constraint "subCat_container_fk". The conflict occurred in database "Sample", table "dbo.SubCategories", column 'Id'.

    The method for adding items to objects is always as follows:

        public void AddSubCategory(ISubCategory subCategory)
        {
            subCategory.Category = this;
            SubCategories.Add(subCategory);
        }

    What am I missing? Thanks, nisbus
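    A hedged reading of that log: each <bag> declares <key column="Id"/>, which tells NHibernate that the child table's foreign-key column is its own primary key column, so the collection machinery writes parent ids into the child's Id and trips the constraint (note the error names dbo.SubCategories, column 'Id'). The <key> should name the actual FK column that the inserts show: Category in SubCategories, SubCategory in Containers. A sketch of the corrected bags, inferred from the column names visible in the log:

        <!-- On Category: the FK column in SubCategories is "Category". -->
        <bag name="SubCategories" cascade="save-update" lazy="true" inverse="true">
          <key column="Category" foreign-key="category_subCategory_fk"/>
          <one-to-many class="SubCategory, Sample"/>
        </bag>

        <!-- On SubCategory: the FK column in Containers is "SubCategory". -->
        <bag name="Containers" inverse="true" cascade="save-update" lazy="true">
          <key column="SubCategory" foreign-key="subCat_container_fk"/>
          <one-to-many class="Container, Sample"/>
        </bag>

    Separately, Container uses generator class="assigned" with unsaved-value="0", so its Id is inserted as the literal 0 (visible as @p7 = '0' in the log); unless container ids really are assigned by the application, a native generator would be more consistent with the rest of the hierarchy.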

    Read the article

  • HttpUtility.HtmlEncode doesn't encode everything

    - by Anthony
    I am interacting with a web server from a desktop client program in C# and .NET 3.5. I am using Fiddler to see what traffic the web browser sends, and emulate that. Sadly this server is old and is a bit confused about the notions of charsets and UTF-8; mostly it uses Latin-1. When I enter data containing "special" characters into the web browser, such as "♈ ♉ ♊ ♋ ♌ ♍ ♎ ♏ ♐ ♑ ♒ ♓", Fiddler shows me that they are transmitted from browser to server as follows:

        &#9800; &#9801; &#9802; &#9803; &#9804; &#9805; &#9806; &#9807; &#9808; &#9809; &#9810; &#9811;

    But in my client, HttpUtility.HtmlEncode does not convert these characters; it leaves them as-is. What do I need to call to convert "♈" to &#9800;, and so on?
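    HttpUtility.HtmlEncode only escapes markup-significant characters (plus, in the framework versions of that era, the Latin-1 range 160-255); it never converts higher code points such as these symbols. A minimal sketch of a helper that emits numeric character references for everything non-ASCII (the method name is an assumption; characters outside the Basic Multilingual Plane would additionally need char.ConvertToUtf32):

        using System.Text;

        static string EncodeNonAscii(string input)
        {
            var sb = new StringBuilder(input.Length);
            foreach (char c in input)
            {
                if (c > 127)
                    sb.Append("&#").Append((int)c).Append(';');  // e.g. '♈' -> &#9800;
                else
                    sb.Append(c);
            }
            return sb.ToString();
        }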

    Read the article

  • Exposed onsite vs IFD deployments for MS Dynamics CRM

    - by Greg McGuffey
    I'm working for the first time on an MS Dynamics CRM 4.0 project. Our company has a high number of remote employees and even more remote consultants, so it will be necessary to make the CRM solution available over the internet. As near as I can tell, I have three options:

    1. Have everyone use a VPN to access an intranet site (typical onsite deployment). However, we have found that VPNs are far from trouble-free and cause many support issues. We avoid them like the plague.
    2. Use IFD to expose the CRM on the internet. I don't know much about this except that the URL will be different from the onsite URL, which could cause some headaches (see below).
    3. Expose the CRM site by opening the site to the internet, using SSL to encrypt traffic. We currently do this with our MS SharePoint sites. I'm not sure how secure this would be (one of the reasons for this question).

    I'd like to avoid using the onsite intranet deployment and the IFD together, for a couple of reasons. One of the requests for the solution is to use email to notify users that they've been assigned a task, including the URL of the task within the email; if both deployments are used, I'll need to include two URLs and the user would need to know which to use. Which leads to the second reason: the main users of the solution split their time between being in the office and being remote, so they would need to access the solution two different ways, and know when to use which. Bad.

    So, what are the advantages/disadvantages of any of these methods? Any other options? Is there any issue using IFD from within the intranet? Security issues? Thanks!

    Read the article

  • django: control json serialization

    - by abolotnov
    Is there a way to control JSON serialization in Django? The simple code below will return the serialized objects as JSON:

        co = Collection.objects.all()
        c = serializers.serialize('json', co)

    The JSON will look similar to this:

        [
          {"pk": 1, "model": "picviewer.collection",
           "fields": {"urlName": "architecture", "name": "\u0413\u043e\u0440\u043e\u0434 \u0438 \u0430\u0440\u0445\u0438\u0442\u0435\u043a\u0442\u0443\u0440\u0430", "sortOrder": 0}},
          {"pk": 2, "model": "picviewer.collection",
           "fields": {"urlName": "nature", "name": "\u041f\u0440\u0438\u0440\u043e\u0434\u0430", "sortOrder": 1}},
          {"pk": 3, "model": "picviewer.collection",
           "fields": {"urlName": "objects", "name": "\u041e\u0431\u044a\u0435\u043a\u0442\u044b \u0438 \u043d\u0430\u0442\u044e\u0440\u043c\u043e\u0440\u0442", "sortOrder": 2}}
        ]

    You can see it serializes in a way that lets you re-create the whole model, should you want to do so at some point (fair enough), but it is not very handy for simple JS AJAX in my case: I want to keep the traffic to a minimum and make the whole thing a little clearer. What I did was create a view that passes the objects to a .json template, and the template does something like this to generate "nicer" JSON output:

        [
        {% if collections %}
          {% for c in collections %}
            {"id": {{c.id}}, "sortOrder": {{c.sortOrder}}, "name": "{{c.name}}", "urlName": "{{c.urlName}}"}{% if not forloop.last %},{% endif %}
          {% endfor %}
        {% endif %}
        ]

    This does work, and the output is much(?) nicer:

        [
          {"id": 1, "sortOrder": 0, "name": "Город и архитектура", "urlName": "architecture"},
          {"id": 2, "sortOrder": 1, "name": "Природа", "urlName": "nature"},
          {"id": 3, "sortOrder": 2, "name": "Объекты и натюрморт", "urlName": "objects"}
        ]

    However, I'm bothered by the fact that my solution uses templates (an extra processing step and a possible performance impact) and will take manual work to maintain should I update the model, for example. I'm thinking JSON generation should be part of the model (correct me if I'm wrong), done with the native Python/Django JSON implementation, but I can't figure out how to make it strip the bits I don't want. One more thing: even when I restrict it to a set of fields to serialize, it keeps the id outside the fields container, presenting it as "pk" instead.
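    A template-free route that produces the same flat shape is to dump a values() queryset with the json module, skipping the model serializer entirely. A minimal sketch, assuming the field names from the question:

        import json
        from django.http import HttpResponse
        from picviewer.models import Collection

        def collections_json(request):
            # values() yields plain dicts with exactly the listed fields,
            # so the id lands inside each object instead of an outer "pk".
            data = list(Collection.objects.values('id', 'sortOrder', 'name', 'urlName'))
            return HttpResponse(json.dumps(data, ensure_ascii=False),
                                content_type='application/json')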

    Read the article

  • Analyzing Web Application Speed

    - by Amy
    I'm a bit confused, because the logical/programmer brain in me says that if all things are constant, the speed of a function must be constant. I am working on a PHP web application with jqGrid as a front end for showing the data. I am testing on my personal computer, so network traffic does not apply. I make an HTTP request to a PHP function, it returns the data, and then jqGrid renders it. What has me befuddled is that Firebug sometimes reports this taking 300-600 milliseconds and sometimes 3.68 seconds. I can run the request over and over again, with radically different response times. The query is the same. The number of users on the system is the same. No network latency. Same code. I'm not running other applications on the computer while testing. I could understand query caching improving performance on subsequent requests, but the speed is just fluctuating wildly with no rhyme or reason. So, my question is: what else can cause such variability in the response time? How can I determine what's causing it? More importantly, is there any way to make things more consistent?
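    A cheap way to split the mystery in half is to time the server side independently of Firebug: if the PHP/query time stays steady while the end-to-end time swings, the variability lies in connection handling or client-side rendering rather than in the query. A minimal sketch (fetch_grid_data is a stand-in for the real data-access call):

        $t0 = microtime(true);
        $rows = fetch_grid_data();   // assumed: the real data-access call
        error_log(sprintf('grid data took %.3f s', microtime(true) - $t0));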

    Read the article

  • SQLite Databases and Grid Hosting

    - by jocull
    I'm considering moving my site from a GoDaddy shared hosting account to a Media Temple grid hosting account in anticipation of traffic. However, I first have some concerns with the grid hosting setup. My site stores a large personal set of data on a per-user basis (possibly 3-4 MB per user). At this rate I was worried about blowing past a 1 GB MySQL limit in no time, so to deal with it I created distributed SQLite databases per user to store the large data objects. It has worked wonderfully so far; SQLite is super fast and simple. I know that reading from and writing to files is different in a grid hosting environment, and I need to know whether this setup is going to cause serious problems. These databases are not (and will not be) highly trafficked. They are personal to the user and will only be touched from maybe two locations at the same time (one updating the data hourly at most, and one or more reading on demand). I'd like to keep this setup, as getting additional space (beyond 4 GB) on a MySQL database seems to be a real trouble point. Will grid hosting cause me serious problems? Thanks.

    Read the article

  • What is the easiest way to add compression to WCF in Silverlight?

    - by caryden
    I have a Silverlight 2 Beta 2 application that accesses a WCF web service. Because of this, it currently can only use basicHttpBinding. The web service returns fairly large amounts of XML data, which seems fairly wasteful from a bandwidth standpoint: the response, if zipped, would be smaller by a factor of 5 (I actually pasted the response into a text file and zipped it). The request does have "Accept-Encoding: gzip, deflate". Is there any way to have the WCF service gzip (or otherwise compress) the response? I did find this link, but it sure seems a bit complex for functionality that should be handled out of the box, IMHO.

    OK, at first I marked the solution using System.IO.Compression as the answer, as I could never "seem" to get IIS7 dynamic compression to work. Well, as it turns out, dynamic compression on IIS7 was working all along; it is just that Nikhil's Web Developer Helper plugin for IE did not show it working. My guess is that since Silverlight hands the web service call off to the browser, the browser handles it "under the covers" and Nikhil's tool never sees the compressed response. I was able to confirm this by using Fiddler, which monitors traffic external to the browser application. In Fiddler, the response was, in fact, gzip compressed!

    The other problem with the System.IO.Compression solution is that System.IO.Compression does not exist in the Silverlight CLR. So from my perspective, the EASIEST way to enable WCF compression in Silverlight is to enable dynamic compression in IIS7 and write no code at all.
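    For reference, a hedged sketch of the IIS7 side (the Dynamic Content Compression module must be installed; the element goes in applicationHost.config, or in a web.config where overrides are allowed):

        <system.webServer>
          <urlCompression doStaticCompression="true" doDynamicCompression="true" />
        </system.webServer>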

    Read the article

  • Cache data in SQL CE database

    - by user93422
    Background: I have an SQL CE database that is constantly updated (every second) and a web application that allows a user to look at the data in real time. At some point the user can click a "take a snapshot" button, which opens the snapshot in a different window. On that form there are "print" and "download" buttons that either generate a page for printing or stream the data as a CSV file; the same data snapshot has to be used for both, i.e. I can't go back to the DB to get the latest data.

    Details: the SQL CE database is exposed through a WCF web service. A snapshot consists of up to 500 records, 10 columns each. An expiration time of 2 hours on the snapshot is sufficient. It is a low-traffic application, so I don't expect more than a few (5) connections at the same time. Losing a snapshot is not a big deal; the user can simply generate a new one. The database is accessed by a self-hosted WCF web service using LINQ to SQL. The web site is ASP.NET MVC hosted on UltiDev Cassini. The database and the web site will most likely be on the same box when deployed. The entire app is intranet-bound.

    Problem: I need to cache the snapshot of the data at the moment the user presses the "take a snapshot" button, so that I can use the same data to generate the print page or the download file.

    Solution 1: each time there is a need to generate a snapshot, create a table in the database. Since there are no temp tables in SQL CE, I would need to clean it up myself.

    Solution 2: cache the snapshot in memory on either the DB server or the web server.

    Question: is there anything wrong with the proposed solutions? Any different suggestions?
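    Given the small size (up to 500 rows of 10 columns) and the tolerance for lost snapshots, solution 2 can stay very simple: an in-memory cache keyed by a snapshot id that the popup window carries in its URL. A minimal sketch using ASP.NET's HttpRuntime.Cache (class and key names are assumptions):

        using System;
        using System.Web;
        using System.Web.Caching;

        static class SnapshotStore
        {
            // Store the rows; return an id for the snapshot window's URL.
            public static string Save(object rows)
            {
                string id = Guid.NewGuid().ToString("N");
                HttpRuntime.Cache.Insert(
                    "snapshot:" + id, rows, null,
                    DateTime.UtcNow.AddHours(2),      // the 2-hour expiry from the question
                    Cache.NoSlidingExpiration);
                return id;
            }

            // Null means the snapshot expired; the user simply takes a new one.
            public static object Load(string id)
            {
                return HttpRuntime.Cache["snapshot:" + id];
            }
        }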

    Read the article

  • White Screen of Death (WSOD) in Browser

    - by nickyt
    Here are the specs:

    - ASP.NET 3.5 using ASP.NET AJAX
    - AJAX Control Toolkit
    - jQuery 1.3.2
    - web services
    - IIS6 on Windows Server 2003 SP1
    - SQL Server 2005 SP3
    - site is SSL

    Here's the problem: I'm getting the White Screen of Death (WSOD) in pretty much any browser (at least Firefox and IE 7/8). We have an application that uses one popup window for updating records. Most of the time when you click the [Edit] button to edit a record, the popup window opens and loads the update page. However, after editing records for a while, all of a sudden the popup window opens but stays blank and just hangs, with the URL in the address bar. Loading up Fiddler, I noticed that the request for the update page is never sent, which leads me to believe it's some kind of lockup on the client side. If I copy the same URL that's in the popup window into a new browser window, the page generally loads fine.

    Observations:

    - Since the request is never sent to the server, it's definitely something client-side.
    - It only appears to happen when there is some semblance of traffic on the site, which is weird because this appears to be contained within client-side code.
    - There is a web service being called in the background every few seconds checking whether the user is logged on, but this doesn't cause the freeze.

    I'm really at a loss here. I've googled WSOD, but not much seems to appear related to my specific WSOD. Any ideas?

    Read the article

  • Do I need to implement an XMPP server?

    - by WTFITS
    (Newbie alert.) I need to program a multiparty communication service for a course project, and I am considering XMPP for it. The service needs the following messaging semantics:

    1. The server will provide a method of registering and unregistering an address such as user1@example.com/SomeResource (for now I will do it manually).
    2. The server will provide a method of forwarding incoming messages from, say, user1@example.com/SomeResource to user2@example.com/someOtherResource, assuming the latter is registered, and a method for removing this forwarding (for now I will do it manually).
    3. Anonymous clients can send messages to, say, user1@example.com/SomeResource (one-way traffic only). If any forwarding is set up, the message will be forwarded. Otherwise, if the address is registered, the message will be stored for later delivery (or delivered immediately if a retrieving client is online; see below). If there is no forwarding and the address is unregistered, the message will be silently dropped.
    4. Clients can connect and retrieve messages from a registered address. The exact method of authenticating clients (e.g., passwords?) is yet to be determined.

    Eventually, I want to add support for clients connecting from a web browser so they can register/unregister and set/remove forwarding themselves. Thus, the server will have to do some non-standard switching. Will I need to implement an XMPP server for this? I guess some (or all?) of this could also be done using an XMPP client bot.

    Read the article

  • Scheduling algorithm optimized to execute during low usage periods.

    - by The Rook
    Let's say there is a web application serving mostly one country. Because of normal sleep habits, website traffic follows a sine wave, where one period lasts 24 hours and the lowest part of the wave is at about midnight. Is there a scheduling algorithm optimized to execute during low-usage periods? I am thinking of this as a liquid that is "poured into" this sine wave to flatten out resource usage. An ideal algorithm would take the integral of this empty space. If the same tasks need to be run daily, the amount of resources consumed by previous executions could be used to predict future usage, by looking at the rate at which resource usage is increasing. By knowing the amount of resources required, this algorithm could fill in the empty space while leaving as much buffer as possible on either side, such that its interference is reduced as much as possible. It would also be possible to detect, before execution begins, that there aren't enough resources, which opens the door for a cloud to help out. Does anything like this exist? Or should I build it into an existing scheduler like Quartz and make it open source?
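    As a toy illustration of the "pour liquid into the trough" idea (purely a sketch; the load model and numbers are invented): model the daily load as a sine wave and pick the start hour that minimizes the combined peak while the job runs.

        import math

        # Assumed model: daily sine wave, trough at midnight, peak at midday.
        def predicted_load(hour):
            return 0.5 - 0.5 * math.cos(2.0 * math.pi * hour / 24.0)

        def best_start(duration_hours, job_load):
            """Start hour that keeps predicted load + job load lowest while running."""
            def peak(start):
                return max(predicted_load((start + t) % 24) + job_load
                           for t in range(duration_hours))
            return min(range(24), key=peak)

        print(best_start(3, 0.2))   # a 3-hour job lands around midnight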

    Read the article

  • Generic ASP.NET MVC Route Conflict

    - by Donn Felker
    I'm working on a legacy ASP.NET system. I say legacy because there are NO tests around 90% of the system. I'm trying to fix the routes in this project and I'm running into an issue I wish to solve with generic routes. I have the following routes:

        routes.MapRoute(
            "DefaultWithPdn",
            "{controller}/{action}/{pdn}",
            new { controller = "", action = "Index", pdn = "" },
            null
        );

        routes.MapRoute(
            "DefaultWithClientId",
            "{controller}/{action}/{clientId}",
            new { controller = "", action = "index", clientid = "" },
            null
        );

    The problem is that the first route is catching all of the traffic that I need routed to the second route. The routes are generic (no controller is named in the constraints of either route definition) because multiple controllers throughout the entire app share this same premise: sometimes we need a "pdn", sometimes we need a "clientId". How can I map these generic routes so that they go to the proper controller and action, yet not have one be too greedy? Or can I at all? Are these routes too generic (which is what I'm starting to believe)? My only option at this point (AFAIK) is to apply, in the constraints, a regex matching the action values, like (foo|bar|biz|bang), and the same for the controller, (home|customer|products), for each controller. However, this has a problem, in that I may need to do this:

        ~/Foo/Home/123   // should map to "DefaultWithPdn"
        ~/Foo/Home/abc   // should map to "DefaultWithClientId"

    This means that if the Foo controller has an action that takes a pdn and another action that takes a clientId (which happens all the time in this app), the wrong route is chosen. Hardcoding these constraints into each possible controller/action combo seems like a lot of duplication to me, and I have the feeling I've been looking at the problem for too long, so I need another pair of eyes to help out. Can I have generic routes to handle this scenario? Or do I need custom routes for each controller, with constraints applied to the actions on those routes? Thanks
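    If the real distinction is that a pdn is numeric and a clientId is not (as the /123 vs /abc example suggests), a pattern constraint on the parameter itself keeps both routes generic without naming controllers or actions. A hedged sketch; the exact patterns are assumptions about what pdn and clientId values look like:

        routes.MapRoute(
            "DefaultWithPdn",
            "{controller}/{action}/{pdn}",
            new { controller = "Home", action = "Index" },
            new { pdn = @"\d+" }                  // digits only: /Foo/Home/123
        );

        routes.MapRoute(
            "DefaultWithClientId",
            "{controller}/{action}/{clientId}",
            new { controller = "Home", action = "Index" },
            new { clientId = @"[A-Za-z]\w*" }     // starts with a letter: /Foo/Home/abc
        );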

    Read the article

  • How to Treat Race Condition of Session in Web Application?

    - by Morgan Cheng
    I work on an ASP.NET application with heavy AJAX request traffic. Once a user logs in to our web application, a session is created to store that user's state. Currently, our solution for keeping session data consistent is quite simple and brutal: each request needs to acquire an exclusive lock before being processed. This works fine for a traditional web application, but when the application turns to supporting AJAX, it becomes inefficient. It is quite possible for multiple AJAX requests to be sent to the server at the same time without reloading the web page. If all AJAX requests are serialized by the exclusive lock, the responses are not so quick, and many AJAX requests that don't access the same session variables are blocked as well. If we don't take an exclusive lock for each request, then we need to treat every race condition carefully to avoid deadlock, and I'm afraid that would make the code complex and buggy. So, is there any best practice for keeping session data consistent while keeping the code simple and clean?
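    In ASP.NET specifically, the per-request session lock is exactly this exclusive lock, and the built-in relief is to mark requests that only read session state as read-only; readers then run concurrently and only writers serialize. A hedged sketch of the page-level switches (AJAX handlers can get the same effect by implementing the IReadOnlySessionState marker interface):

        <%-- Reads session but never writes: shares a reader lock. --%>
        <%@ Page Language="C#" EnableSessionState="ReadOnly" %>

        <%-- Never touches session: takes no session lock at all. --%>
        <%@ Page Language="C#" EnableSessionState="False" %>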

    Read the article

  • Oracle - UPSERT with update not executed for unmodified values

    - by Buthrakaur
    I'm using the following update-or-insert Oracle statement at the moment:

        BEGIN
            UPDATE DSMS
            SET SURNAME = :SURNAME, FIRSTNAME = :FIRSTNAME, VALID = :VALID
            WHERE DSM = :DSM;
            IF (SQL%ROWCOUNT = 0) THEN
                INSERT INTO DSMS (DSM, SURNAME, FIRSTNAME, VALID)
                VALUES (:DSM, :SURNAME, :FIRSTNAME, :VALID);
            END IF;
        END;

    This runs fine, except that the update statement performs a dummy update when the data is the same as the parameter values provided. I would not mind the dummy update in a normal situation, but there is a replication/synchronization system built over this table, using triggers to capture updated records, and executing this statement frequently for many records means I'd cause huge traffic in the triggers and the sync system. Is there any simple way to reformulate this code so that the update statement doesn't touch the record when unnecessary, without using the following IF-EXISTS check, which I find not sleek enough and maybe also not the most efficient for this task?

        DECLARE
            CNT NUMBER;
        BEGIN
            SELECT COUNT(1) INTO CNT FROM DSMS WHERE DSM = :DSM;
            IF SQL%FOUND THEN
                UPDATE DSMS
                SET SURNAME = :SURNAME, FIRSTNAME = :FIRSTNAME, VALID = :VALID
                WHERE DSM = :DSM
                  AND (SURNAME != :SURNAME OR FIRSTNAME != :FIRSTNAME OR VALID != :VALID);
            ELSE
                INSERT INTO DSMS (DSM, SURNAME, FIRSTNAME, VALID)
                VALUES (:DSM, :SURNAME, :FIRSTNAME, :VALID);
            END IF;
        END;
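    One route worth considering (a hedged sketch using the same binds) is a single MERGE whose update branch carries the inequality test, so unchanged rows are never written and the triggers stay quiet. Note the != tests would need NULL-safe handling if any of the columns are nullable:

        MERGE INTO DSMS d
        USING (SELECT :DSM AS DSM, :SURNAME AS SURNAME,
                      :FIRSTNAME AS FIRSTNAME, :VALID AS VALID
               FROM dual) s
           ON (d.DSM = s.DSM)
        WHEN MATCHED THEN UPDATE
           SET d.SURNAME = s.SURNAME, d.FIRSTNAME = s.FIRSTNAME, d.VALID = s.VALID
           WHERE d.SURNAME != s.SURNAME
              OR d.FIRSTNAME != s.FIRSTNAME
              OR d.VALID != s.VALID
        WHEN NOT MATCHED THEN INSERT (DSM, SURNAME, FIRSTNAME, VALID)
           VALUES (s.DSM, s.SURNAME, s.FIRSTNAME, s.VALID);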

    Read the article

  • Refactoring or Rewriting Monolithic PHP Spaghetti Codebase

    - by nategood
    I've inherited a really poorly designed PHP spaghetti-code project. It's been gaining a good bit of traffic recently and is starting to have performance issues on top of the poor monolithic code base. It's maxing out performance on a chunky 16 GB dedicated machine when it really shouldn't be. I'm planning on doing some performance tweaks right off the bat to help the performance issue, but that still won't really help the horrible code base. The team is small but expecting to grow very soon. I've read Joel's article on the troubles of doing a complete rewrite and see the concerns. But how bad does the code base have to be before you consider a rewrite? There is PHP handling logic interjected into what one would usually consider a "view". Even worse, in some places SQL statements are in these same files! The only real separation of presentation and logic is a few PHP scripts that serve as function libraries. These scripts do most of the ORM stuff... if you can even call it that. Trying to slowly refactor this seems like a nightmare. Open to your thoughts and opinions... however, not interested in hearing "Run away, run away!".

    Read the article

  • Ways of breaking down SQL transactional/call data into reports -- 'square data'?

    - by RizwanK
    I've got a large database of call-traffic information (although the question could be answered with any generic data set). For instance, a row contains:

    - call endpoint server (endpoint_name)
    - call endpoint status (sip_disconnect_reason)
    - call destination (destination)
    - call completed (duration) [duration > 0 is completed]
    - call account group (account_group)

    It's pretty easy to run SQL reports against the data, i.e.:

        select count(*), endpoint_name from calls where duration > 0 group by endpoint_name
        select count(*), destination from calls where blah group by destination

    I've been calling these filtering or breakdown reports (I get the number of calls per carrier, etc.). Add another breakdown and you've got two breakdowns, a la:

        select count(*), endpoint_name, sip_disconnect_reason
        from calls where duration = 0
        group by endpoint_name, sip_disconnect_reason

    Of course, if you keep adding breakdowns, you end up making super-large reports and slicing your data so thin that you can't extract any trends from it. So my question is this: is there a name for this sort of method of report writing? (I've heard words like squares, slicing and breakdown reports applied to them.) I'm looking for a Python/reporting toolkit that I can use to make these easier to generate for my end users. As an aside: are there other ways of representing transactional data that might be more useful than the above method? Thanks.
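    This style is usually described as slicing and dicing across dimensions, i.e. OLAP-style reporting, with endpoint_name, destination and the rest as dimensions and count(*) as the measure. Where the database supports it (Oracle, SQL Server 2008+, among others), CUBE produces every combination of breakdowns in one statement; a hedged sketch:

        -- All combinations of the two breakdowns, plus subtotals and a grand total.
        SELECT endpoint_name, sip_disconnect_reason, COUNT(*) AS calls
        FROM calls
        WHERE duration = 0
        GROUP BY CUBE (endpoint_name, sip_disconnect_reason);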

    Read the article

  • Why is search functionality not working on this page?

    - by DaveDev
    We deliver micro-site content for our client. Our content is injected into a wrapper that is supplied by another developer; to deliver our content, we host the wrapper as well as the content. The user can access this at http://fundcentre.newireland.ie/ (try a search for 'bloxham'). For the other content that is not ours, the other developer hosts a similar (though slightly different) wrapper and delivers the content. The user accesses this here: http://www.newireland.ie/ (try a search for 'bloxham'). The wrapper contains a search box, which does not work for us but works for the other developer. I took a look at the network traffic with Firebug, and it appears that when I do the search from the wrapper that we're hosting, I get a "407 Proxy Access Denied" error. My guess is their proxy has a problem with the fact that the search is being conducted from a page hosted outside the scope of their proxy. It was also suggested that there were JavaScript errors on the page preventing the search from executing, but I can't see any; besides, I don't think I'd get as far as the proxy error if that were the case. I don't really understand this stuff too well, though, so could somebody with a bit more experience please take a look and maybe shed some light on this for me? Thanks.

    Read the article

  • Show/hide text based on optgroup selection using Jquery

    - by general exception
    I have the following HTML markup:

        <select name="Fault" class="textbox" id="fault">
          <option>Single Light Out</option>
          <option>Light Dim</option>
          <option>Light On In Daytime</option>
          <option>Erratic Operating Times</option>
          <option>Flashing/Flickering</option>
          <option>Causing Tv/Radio Interference</option>
          <option>Obscured By Hedge/Tree Branches</option>
          <option>Bracket Arm Needs Realigning</option>
          <option>Shade/Cover Missing</option>
          <option>Column In Poor Condition</option>
          <option>Several Lights Out (please state how many)</option>
          <option>Column Leaning</option>
          <option>Door Missing/Wires Exposed</option>
          <option>Column Knocked Down/Traffic Accident</option>
          <option>Lantern Or Bracket Broken Off/Hanging On Wires</option>
          <option>Shade/Cover Hanging Open</option>
        </select>
        <span id="faulttext" style="color:Red; display:none">Text in the span</span>

    This jQuery snippet wraps the last five options in an option group:

        $('#fault option:nth-child(n+12)').wrapAll('<optgroup label="Urgent Reasons">');

    What I want to do is remove the display:none when any of the items within the <optgroup> is selected, effectively displaying the span message (possibly with a fade-in transition), and also hide the message when any option outside of the <optgroup> is selected.
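    A hedged sketch of the handler (assuming the markup above, after the wrapAll call has run, and jQuery 1.3.2-era APIs):

        // Show the warning when the selected option sits inside the optgroup;
        // hide it when any option outside the group is selected.
        $('#fault').change(function () {
            if ($(this).find('option:selected').parent().is('optgroup')) {
                $('#faulttext').fadeIn('slow');
            } else {
                $('#faulttext').hide();
            }
        });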

    Read the article

  • AppFabric caching's local cache isn't working for us... What are we doing wrong?

    - by Olly
    We are using AppFabric as the second-level cache for an NHibernate ASP.NET application comprising a customer-facing website and an admin website. They are both connected to the same cache, so when admin updates something, the customer-facing site is updated. It seems to be working OK: we have a cache cluster on a separate server and all is well, but we want to enable the local cache to get better performance. However, it doesn't seem to be working. We have enabled it like this:

        bool UseLocalCache = true;
        int LocalCacheObjectCount = int.MaxValue;
        TimeSpan LocalCacheDefaultTimeout = TimeSpan.FromMinutes(3);
        DataCacheLocalCacheInvalidationPolicy LocalCacheInvalidationPolicy =
            DataCacheLocalCacheInvalidationPolicy.TimeoutBased;

        if (UseLocalCache)
        {
            configuration.LocalCacheProperties = new DataCacheLocalCacheProperties(
                LocalCacheObjectCount,
                LocalCacheDefaultTimeout,
                LocalCacheInvalidationPolicy
            );
            // configuration.NotificationProperties =
            //     new DataCacheNotificationProperties(500, TimeSpan.FromSeconds(300));
        }

    Initially we tried a timeout invalidation policy (3 minutes), and our app felt like it was running faster. HOWEVER, we noticed that if we changed something in the admin site, it was immediately updated in the live site. As we are using timeouts, not notifications, this demonstrates that the local cache isn't being queried (or is, but always misses). cache.GetType().Name returns "LocalCache", so the factory has made a local cache. Running "Get-Cache-Statistics MyCache" in PowerShell on my dev environment (ASP.NET app running locally from VS2008, cache cluster running on a separate W2K8 machine) shows a handful of request counts; however, on the production environment, the request count increases dramatically. We tried following the method here to see the cache client-server traffic: http://blogs.msdn.com/b/appfabriccat/archive/2010/09/20/appfabric-cache-peeking-into-client-amp-server-wcf-communication.aspx but the log file had nothing but the initial header in it, i.e. no logging either. I can't find anything on SO or Google. Have we done something wrong? Do we have a screwy install of AppFabric? (We installed it via Web Platform Installer, I think.) Note: the IIS box running ASP.NET isn't in the cluster; it is just the client. Any insights gratefully received!

    Read the article
