Search Results

Search found 6198 results on 248 pages for 'traffic filtering'.


  • How to Test Individual Front End Web Server

    - by ChiliYago
    My farm consists of two front-end (FE) web servers behind a load balancer. One FE went down, so we configured the load balancer to send traffic only to the other FE. We rebuilt the failed FE and rejoined it to the farm, which appears to have worked (judging by IIS). I want to test the new FE before configuring the load balancer to send traffic to it again. The approach I took was to add an entry to my hosts file pointing the site's URL at the new server, but nothing comes up. Any advice would be great. Thanks
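    For reference, a hosts-file override for this kind of test maps the site's host name straight to the rebuilt server, bypassing the load balancer's virtual IP; the address and name below are placeholders:

        # C:\Windows\System32\drivers\etc\hosts on the test client
        10.0.0.12    www.example.com    # rebuilt front end, not the load balancer VIP

    The test only shows the new FE if the IIS site binding on that server answers for the same host header.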

    Read the article

  • Preparing for Rails deployment

    - by Meltemi
    Getting ready to deploy a Rails project on Mac OS X Leopard Server (for what that's worth). I've got a few questions for someone with Rails experience: Where should the directory containing the project go? Inside the website's root folder, or outside it? Who should "own" that directory? www? root? Something/someone else? I hope to continue serving static pages via Apache, and would like the Rails app to be served at mydomain/xxx/railsapp. Is there a standard naming convention for 'xxx'? Not expecting too much traffic to begin with... I'd just like to keep things as simple as possible.
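    For what it's worth, one common way to serve a Rails app under a sub-URI while Apache keeps serving the static pages is Phusion Passenger (mod_rails). A minimal sketch, assuming Passenger is installed and with hypothetical paths:

        # 1. Symlink the app's public/ directory into the static document root:
        #      ln -s /Users/deploy/railsapp/public /Library/WebServer/Documents/railsapp
        # 2. Declare the sub-URI in the vhost:
        <VirtualHost *:80>
            ServerName www.example.com
            DocumentRoot /Library/WebServer/Documents   # static pages stay with Apache
            RailsBaseURI /railsapp                      # Rails app appears at /railsapp
        </VirtualHost>

    In that layout the application directory itself can sit anywhere outside the document root, since only its public/ folder is exposed through the symlink.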

    Read the article

  • What are the responsibilities of the data layer?

    - by alimac83
    I'm working on a project where I had to add a data layer to my application. I've always thought that the data layer is purely responsible for CRUD functions, i.e. it shouldn't really contain any logic but should simply retrieve data for the business layer to manipulate. However, I'm a little confused with my project because I'm not sure whether I've structured my app correctly for this scenario. Basically I'm trying to retrieve a list of products from the database that fall within a certain pricing threshold. At the moment I have a function in my data layer that basically returns all products where price > the minimum threshold and price < the maximum threshold. But it got me thinking that maybe this is incorrect. Should the data layer simply return a list of ALL products and then the business logic do the filtering? I'm pretty confused over whether the data layer should simply provide methods that allow the business layer to get raw data, or whether it should be responsible for getting filtered data too. If anyone has an article or something explaining this in detail it'd be very helpful. Thanks
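    As an illustration of the first option, a data-access method can take the thresholds as parameters so the filtering happens in the database while the business layer still decides what the thresholds are. A minimal C# sketch with invented table and type names:

        using System.Collections.Generic;
        using System.Data.SqlClient;

        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        public class ProductRepository
        {
            private readonly string _connectionString;

            public ProductRepository(string connectionString)
            {
                _connectionString = connectionString;
            }

            // The business layer supplies the thresholds; the data layer only runs the query.
            public IList<Product> GetProductsInPriceRange(decimal minPrice, decimal maxPrice)
            {
                var products = new List<Product>();
                using (var conn = new SqlConnection(_connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT Id, Name, Price FROM Products WHERE Price > @min AND Price < @max", conn))
                {
                    cmd.Parameters.AddWithValue("@min", minPrice);
                    cmd.Parameters.AddWithValue("@max", maxPrice);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            products.Add(new Product
                            {
                                Id = reader.GetInt32(0),
                                Name = reader.GetString(1),
                                Price = reader.GetDecimal(2)
                            });
                        }
                    }
                }
                return products;
            }
        }

    Returning every product and filtering in the business layer would move data across the wire only to throw most of it away, which is usually the deciding argument for parameterised queries like this.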

    Read the article

  • Is there a way in CXF to disable the SoapCompressed header for debugging purposes?

    - by Don Branson
    I'm watching CXF service traffic using DonsProxy, and the CXF client sends an HTTP header "SoapCompressed":

        HttpHeadSubscriber starting...
        Sender is CLIENT at 127.0.0.1:2680
        Packet ID:0-1
        POST /yada/yada HTTP/1.1
        Content-Type: text/xml; charset=UTF-8
        SoapCompressed: true
        Accept-Encoding: gzip,gzip;q=1.0, identity; q=0.5, *;q=0
        SOAPAction: ""
        Accept: */*
        User-Agent: Apache CXF 2.2
        Cache-Control: no-cache
        Pragma: no-cache
        Host: localhost:9090
        Connection: keep-alive
        Transfer-Encoding: chunked

    I'd like to turn SoapCompressed off in my dev environment so that I can see the SOAP on the wire. I've searched Google and grepped the CXF source code, but I don't see anything in the docs or code that references this header. Any idea how to make the client send "SoapCompressed: off" instead, without routing the traffic through Apache HTTPD or the like? In other words, is there a way to configure this at the CXF client?

    Read the article

  • No Cookies at second Webrequest

    - by Collin Peters
    Hello, I'm writing a little tool in C# with Visual Studio 2008. My problem: I log in to a website with an HTTP web request and get an authentication cookie; that all works. Then I make a second HTTP web request and add the cookies from the first request in order to call the next page, where I can see my personal data. When I debug I can see that the cookies are associated with the second request, but when I check the network traffic I see that no cookies are actually transmitted with the second request. I've tried many things to find out why it doesn't work, but found nothing. Does somebody have the same problem or know a solution? (Sorry for the bad English.)
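    For comparison, the usual pattern is to share one CookieContainer between the two requests rather than copying cookies by hand, so the authentication cookie captured from the login response is sent automatically afterwards. A rough sketch with placeholder URLs:

        using System;
        using System.Net;

        class CookieCarryOverSketch
        {
            static void Main()
            {
                var cookies = new CookieContainer();

                var login = (HttpWebRequest)WebRequest.Create("https://www.example.com/login");
                login.Method = "POST";
                login.CookieContainer = cookies;       // captures Set-Cookie from the response
                // ... write the login form fields to login.GetRequestStream() here ...
                using (login.GetResponse()) { }

                var dataPage = (HttpWebRequest)WebRequest.Create("https://www.example.com/account");
                dataPage.CookieContainer = cookies;    // same container, so the cookie is sent
                using (var response = (HttpWebResponse)dataPage.GetResponse())
                {
                    Console.WriteLine(response.StatusCode);
                }
            }
        }

    Cookies added manually also need a Domain and Path matching the request URI, or the container quietly leaves them out of the request, which is one common cause of the symptom described above.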

    Read the article

  • How do I build a filtered_streambuf based on basic_streambuf?

    - by swestrup
    I have a project that requires me to insert a filter into a stream so that outgoing data will be modified according to the filter. After some research, it seems that what I want to do is create a filtered_streambuf along these lines:

        template <class StreamBuf>
        class filtered_streambuf : public StreamBuf { ... };

    and then insert a filtered_streambuf<> into whichever stream I need to be filtered. My problem is that I don't know what invariants I need to maintain while filtering a stream in order to ensure that derived classes can work as expected. In particular, I may find I have filtered_streambufs built over other filtered_streambufs, and all the various stream inserters, extractors and manipulators should keep working as expected. The trouble is that I just can't seem to work out what the minimal interface is that I need to supply in order to guarantee that an iostream will have what it needs to work correctly. In particular, do I need to fake the movement of the protected pointer variables, or not? Do I need a fake data buffer, or not? Can I just override the public functions, rewriting them in terms of the base streambuf, or is that too simplistic?
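    For context, the textbook-minimal output filter keeps no buffer and never touches the protected pointer variables; it wraps another streambuf and overrides only overflow() (and sync()), so every character written through the stream passes through it. This sketch wraps rather than derives, which is a different design from the template above, but it shows the smallest interface an ostream actually needs:

        #include <cctype>
        #include <iostream>
        #include <streambuf>

        // Unbuffered filtering streambuf: upper-cases everything written through it.
        class upcase_streambuf : public std::streambuf {
        public:
            explicit upcase_streambuf(std::streambuf* dest) : dest_(dest) {}

        protected:
            // With no put buffer installed, every character ends up here.
            int_type overflow(int_type ch) override {
                if (traits_type::eq_int_type(ch, traits_type::eof()))
                    return traits_type::not_eof(ch);
                return dest_->sputc(traits_type::to_char_type(std::toupper(ch)));
            }

            int sync() override { return dest_->pubsync(); }

        private:
            std::streambuf* dest_;
        };

        int main() {
            upcase_streambuf filter(std::cout.rdbuf());
            std::ostream filtered(&filter);
            filtered << "hello, filtered world" << std::endl;   // prints HELLO, FILTERED WORLD
        }

    Stacking works the same way: a second filter's destination simply points at the first one. Input filtering additionally needs underflow()/uflow(), and a real put buffer is only an optimisation.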

    Read the article

  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL2k8. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple of other tables that define what the question text is for the QuestionId and demographics for the RespondentId...

    Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates an export, and it's working... The problem is that it seems slow, and we don't have that much data (less than 50k respondents). Right now I'm thinking "why am I 'paying' to de-aggregate the data for each query? Why don't I cache that?"

    The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level, find out which respondents are being exported, and then select their combined cached de-aggregated data.

    To me the quick and dirty fix is a totally flat table: RespondentId, Question1, Question2, etc. The problem is, we have multiple clients so that doesn't scale, AND I don't want to have to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the results of a SELECT * FROM Data WHERE RespondentId = x FOR XML AUTO. With that in place, I would then be able to build my export with filtering and XML calls into the XML column.

    What are you doing to export aggregated data in a flattened format (CSV, Excel, etc.)? Does this approach seem OK? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], XmlCol.RinseAndRepeat)... Is there a better technology/approach for this? Thanks!
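    To make the caching idea concrete, here is a rough T-SQL sketch of both halves. All table, column and element names are invented, the cache column is assumed to be of type xml, and note that value() requires a singleton path, hence the [1]:

        -- Cache the de-aggregated answers for one respondent as XML...
        UPDATE Respondents
        SET    CachedAnswers = (SELECT QuestionId, Answer
                                FROM   Data
                                WHERE  Data.RespondentId = Respondents.RespondentId
                                FOR XML AUTO, TYPE)
        WHERE  RespondentId = @RespondentId;

        -- ...then shred the cached XML at export time.
        SELECT RespondentId,
               CachedAnswers.value('(/Data[@QuestionId="1"]/@Answer)[1]', 'nvarchar(50)') AS [Question 1]
        FROM   Respondents
        WHERE  RespondentId = @RespondentId;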

    Read the article

  • Glitch when moving camera in OpenGL

    - by CG
    I am writing a tile-based game engine for the iPhone, and it works in general apart from the following glitch. The camera always keeps the player in the centre of the screen, it moves to follow the player correctly, and everything draws correctly while stationary. However, while the player is moving, the tiles of the surface the player is walking on glitch, showing gaps between them (see the screenshots in the original post; the stationary case renders correctly).

    Thanks for the responses so far. Floating-point error was my first thought too, and I tried slightly increasing the size of the tiles, but this did not help. Changing glClearColor to red still leaves black gaps, so maybe it isn't floating-point error. Since the tiles in general will use different textures, I don't know if vertex arrays can be used (I always thought that the same texture had to be applied to everything in the array; correct me if I'm wrong), and I don't think VBOs are available in OpenGL ES. Setting the filtering to nearest neighbour improved things, but the glitch still happens every ten frames or so, and the pixelly result means that this solution is not viable anyway.

    The main difference between what I'm doing now and what I've done in the past is that this time I am moving the camera rather than the stationary objects in the world (i.e. the tiles; the player is still being moved). The code I'm using to move the camera is:

        void Camera::CentreAtPoint( GLfloat x, GLfloat y )
        {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrthof(x - size.x / 2.0f, x + size.x / 2.0f,
                     y + size.y / 2.0f, y - size.y / 2.0f, 0.01f, 5.0f);
            glMatrixMode(GL_MODELVIEW);
        }

    Is there a problem with doing things this way, and if so, is there a solution?

    Read the article

  • "Mail merge"-like functionality in Dreamweaver, or in any other web editing tool?

    - by Chris Farmer
    I have inherited several related, low-traffic web sites to manage and edit. These sites are implemented with static html, and they've accrued lots of stray tags and other cruft. I want to try to clean these up and migrate them to some common page template framework to simplify design and data changes and improve overall consistency. The pages will change on the timescale of weeks, and since the current web hosting plan does not support any dynamic server technologies, I was hoping to just use Dreamweaver or some other tool to merge my content data with some templating structure. I'd like to do content updates every several days and then run the content back through my templates, resulting in new static html that I can upload to the host. Do any tools support this kind of poor-man's data-driven web application? Are there better ways to approach this problem, aside from moving to a new hosting plan and using ASP.NET or PHP?

    Read the article

  • Ways of breaking down SQL transactional/call data into reports -- 'square data'?

    - by RizwanK
    I've got a large database of call-traffic information (although the question could be answered with any generic data set). For instance, a row contains:

        call endpoint server (endpoint_name)
        call endpoint status (sip_disconnect_reason)
        call destination (destination)
        call completed (duration) [duration <> 0 means the call completed]
        call account group (account_group)

    It's pretty easy to run SQL reports against the data, e.g.

        select count(*), endpoint_name from calls where duration <> 0 group by endpoint_name
        select count(*), destination from calls where blah group by destination

    I've been calling these filtering or breakdown reports (I get the number of calls per carrier, etc.). Add another breakdown and you've got two breakdowns, a la

        select count(*), endpoint_name, sip_disconnect_reason from calls where duration = 0 group by endpoint_name, sip_disconnect_reason

    Of course, if you keep adding breakdowns, you end up making super-large reports and slicing your data so thin that you can't extract any trends from it. So my question is this: is there a name for this sort of method of report writing? (I've heard words like squares, slicing and breakdown reports applied to them.) I'm looking for a Python/reporting toolkit that I can use to make these easier to generate for my end users.

    Aside: are there other ways of representing transactional data that might be useful rather than the above method? Thanks,
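    For what it's worth, in OLAP/BI terms these are usually called slicing and dicing a cube (each breakdown column is a "dimension"), and most SQL engines can compute several breakdowns in one pass with GROUPING SETS, ROLLUP or CUBE. A rough sketch against the columns above, assuming a dialect that supports GROUPING SETS:

        -- Per-endpoint, per-destination and endpoint+disconnect-reason breakdowns
        -- of failed calls, produced by a single query.
        SELECT endpoint_name,
               destination,
               sip_disconnect_reason,
               COUNT(*) AS calls
        FROM   calls
        WHERE  duration = 0
        GROUP  BY GROUPING SETS (
                 (endpoint_name),
                 (destination),
                 (endpoint_name, sip_disconnect_reason)
               );

    Columns that are not part of a given grouping set come back as NULL in that set's rows, which is how the result distinguishes the three breakdowns.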

    Read the article

  • Flex SDK 3.5 - Check file mimetype

    - by Fernando
    Is there a way to get a file's MIME type in Flex SDK 3.5 without relying on its extension? I need to validate whether an uploaded file is of a certain kind. This is for images or documents (PDF, ODT, etc.). All the solutions I've found check the extension. What if I rename an .odt file as a .jpg? Then I can upload it as an image... I should add that we are using an AIR desktop client and a Java EE server. File checking is solved on the Java side, but the idea is not to go to the server: validate the file on the client so that if it's not valid, there's no network traffic at all.

    Read the article

  • Problem: connecting to Drupal on HostGator without a pointed domain

    - by Kirk
    I need to connect to Drupal on a website that I'm building on a HostGator account. The website that I'm working on currently receives a lot of traffic, and I would like to make sure that the new site is fully functional before I launch it. The old website was written in classic ASP, which HostGator doesn't support, and the old web host doesn't support Drupal or PHP, which are integral to the new design. Drupal 6 has been freshly installed on my website, but when I try to log in, it redirects me to http://gator83.hostgator.com/?q=node&destination=node instead of the Drupal control panel. Is it possible to work around this? Thank you in advance.

    Read the article

  • Complex SQL design, help/advice needed

    - by eugeneK
    Hi, I have a few questions for the SQL gurus in here... Briefly, this is an ads management system where a user can define campaigns for different countries, categories and languages. I have a few questions in mind, so help me with what you can. Generally I'm using ASP.NET, and I want to cache the whole result set for a user once he asks for statistics for the first time; this way I will avoid large round-trips to the server. Any help is welcome.

    Click here for a diagram with all the details you need for my questions.

    1. The main issue in this application is showing the user how many clicks/impressions there were and how much money he spent on a campaign. What is the easiest way to get this information for him? I will also include filtering by date, date ranges and a few other params in this statistics table.

    2. The other issue is what happens when the user tries to edit a campaign. The old campaign will die; this means that if the user set $0.01 as the campaignPPU (pay-per-unit) and the next day updates it to $0.05, everything will be reset to $0.05.

    3. If you could re-design some parts of the table design so it would be more flexible and easier to modify, how would you do it?

    Thanks... sorry for such a large job, but it may interest some SQL guys in here.

    Read the article

  • Data access strategy for a site like SO - sorted SQL queries and simultaneous updates that affect th

    - by Kaleb Brasee
    I'm working on a Grails web app that would be similar in access patterns to StackOverflow or MyLifeIsAverage - users can vote on entries, and their votes are used to sort a list of entries based on the number of votes. Votes can be placed while the sorted select queries are being performed. Since the selects would lock a large portion of the table, it seems that normal transaction locking would cause updates to take forever (given enough traffic). Has anyone worked on an app with a data access pattern such as this, and if so, did you find a way to allow these updates and selects to happen more or less concurrently? Does anyone know how sites like SO approach this? My thought was to make the sorted selects dirty reads, since it is acceptable if they're not completely up to date all of the time. This is my only idea for possibly improving performance of these selects and updates, but I thought someone might know a better way.
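    For reference, on SQL Server (which is what the Stack Overflow team has described using) the "dirty read" idea is just an isolation level or table hint, while row versioning gives non-blocking reads without seeing uncommitted data. A sketch with invented table names:

        -- Dirty read per statement: the vote-sorted listing ignores locks held by writers.
        SELECT TOP 50 e.entry_id, e.title, COUNT(v.vote_id) AS votes
        FROM   entries e WITH (NOLOCK)
               LEFT JOIN votes v WITH (NOLOCK) ON v.entry_id = e.entry_id
        GROUP  BY e.entry_id, e.title
        ORDER  BY votes DESC;

        -- Or, database-wide, let readers see the last committed version instead of blocking:
        ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON;

    Other databases expose the same trade-offs through their isolation levels (e.g. READ UNCOMMITTED), so the choice mostly depends on what the Grails app is running against.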

    Read the article

  • Trouble creating a SQL query

    - by JoBu1324
    I've been thinking about how to compose this SQL query for a while now, but after thinking about it for a few hours I thought I'd ask the SO community to see if they have any ideas. Here is a mock-up of the relevant portion of the tables:

        contracts: id, date, ar (yes/no), term
        payments:  contract_id, payment_date

    The object of the query is to determine, per month, how many payments we expect versus how many payments we received. The conditions for expecting a payment are:

        Expected payments begin contracts.term months after contracts.date, if contracts.ar is "yes".
        Payments continue to be expected until the month after the first missed payment.

    There is one other complication to this: payments might be late, but they need to show up as if they were paid on the expected date. The data is all there, but I've been having trouble wrapping my head around the SQL query. I am not an SQL guru; I merely have a decent amount of experience handling simpler queries. I'd like to avoid filtering the results in code if possible, but without your help that may be what I have to do.

    Expected output:

        Month     Expected Payments   Received Payments
        January   500                 450
        February  498                 478
        March     234                 211
        April     987                 789
        ...

    I've created an SQL Fiddle: http://sqlfiddle.com/#!2/a2c3f/2

    Read the article

  • I need help understanding how this jQuery filter function works, line-by-line, if possible

    - by user717236
    Here is the HTML:

        <div>
          <h3>text</h3>
        </div>
        <div>
          <h3>moretext</h3>
        </div>
        <div>
          <h3>123</h3>
        </div>

    Here is the JS:

        var rv1_wlength = $("div").filter(function() {
            return $(this).find("h3").filter(function () {
                return $(this).text() != "123";
            }).length;
        });

        var rv1_wolength = $("div").filter(function() {
            return $(this).find("h3").filter(function () {
                return $(this).text() != "123";
            });
        });

        var rv2 = $("div").find("h3").filter(function() {
            return $(this).text() != "123";
        });

        alert(rv1_wlength.text());   // text
                                     // moretext
        alert(rv1_wolength.text());  // text
                                     // moretext
                                     // 123
        alert(rv2.text());           // textmoretext

    I don't understand why the first two calls print the elements on separate lines, whereas the last one concatenates them. rv2 is a jQuery object; then what are the first two (rv1_wlength and rv1_wolength)? Furthermore, I don't understand why the inclusion of the length property makes all the difference in filtering the elements. The second method does nothing, since it returns all the elements. The first method, with the only change being the addition of the length property, correctly filters the elements. I would very much like a line-by-line explanation. I would sincerely appreciate any feedback. Thank you.

    Read the article

  • Getting batch_action functionality in Symfony 1.0

    - by Alex Ciminian
    I'm currently working on a web app written in Symfony. I'm supposed to add an "export to CSV" feature to the backend/administration part of the app for some modules. In the list view there should be an "Export" button which provides the user with a CSV file of the elements that are displayed (taking the filtering criteria into account). I've created a method in the actions class of the module that takes a comma-separated list of ids and generates the CSV, but I'm not really sure how to add the link to it in the view. The problem is that the view doesn't exist anywhere; it's generated on the fly from the data in the generator.yml configuration file. I've posted the relevant part of the file below. I'm new to Symfony, so any help would be appreciated :).

    Thanks, Alex

    Update:

        list:
          display:        [id, =name, indemn, _status, _participants, _approved_, created_at]
          title:          Lista actiuni
          object_actions:
            _edit:        ~
            _delete:      ~
          actions:
            _create:      ~
            export_csv:
              name:       Export to CSV
              action:     CSVExport
              params:     id=csvActionSubmit
          filters:        [name, county_id, _status_filter, activity_id]
          fields:
            id:
              name:       Nr. crt.
          ...

    Thanks to your advice, I've managed to add a button that is linked to my action. The problem is that I also need to send some parameters to the action, because I may not want all the elements (filters may have been used). Unfortunately, the project is using Symfony 1.0, which doesn't support batch actions. Currently I'm working around this with JavaScript (I parse the DOM to get the numeric ids from the display table and then build the link for the button). I really think there could be a better way to do this.

    Read the article

  • Matching a rotated bitmap to a collage image

    - by Dmi
    Hi, My problem is that I have an image of a detailed street map. On this map, there can be a certain small image of a sign (such as a traffic light icon) rotated at any angle, maybe resized. I have this small image in a bitmap. Is there any algorithm or technique by which I can locate this bitmap if a copy of it exists, rotated and maybe resized, in the large collage image? This is similar to the problem with Augmented Reality and locating the marker image, but mine is only 2D with no perspective distortion.

    Read the article

  • OpenSSH connection trouble

    - by gnostical
    Hi, I'm trying to use Putty 0.60 to log in to an OpenSSH 5.3 server. Connections with openssh from another Linux server are possible, but Putty fails. Putty's event log tells me "software caused connection abort" right after the DH key exchange, the server log doesn't report anything (set to INFO). I analyzed the traffic with Wireshark and got a whole bunch of "TCP retransmission" and "TCP DUP ACK" after said key exchange. Sometimes I was able to log in, but at some point (usually < 2 min.) the connection froze without any logged messages. Sadly, I didn't capture a trace. The server is my own (Funtoo with genkernel and gentoo-sources 2.6.34), so I may tweak it, but I'd still like to know what causes the error. Any suggestions? Thank you!

    Read the article

  • django join-like expansion of queryset

    - by jimbob
    I have a list of Persons, each of which has multiple fields that I usually filter on, using the object_list generic view. Each person can have multiple Comments attached to them, each with a datetime and a text string. What I ultimately want to do is have the option to filter comments based on dates.

        class Person(models.Model):
            name = models.CharField("Name", max_length=30)
            ## has ~30 other fields, usually filtered on as well

        class Comment(models.Model):
            date = models.DateTimeField()
            person = models.ForeignKey(Person)
            comment = models.TextField("Comment Text", max_length=1023)

    What I want to do is take a queryset like

        Person.objects.filter(comment__date__gt=date(2011,1,1)).order_by('comment__date')

    send that queryset to object_list, and be able to see only the comments ordered by date, with only so many objects on a page. E.g., if "Person A" has comments on 12/3/11, 1/2/11 and 1/5/11, "Person B" has no comments, and Person C has a comment on 1/3, I would see:

        "Person A", 1/2 - comment
        "Person C", 1/3 - comment
        "Person A", 1/5 - comment

    I would strongly prefer not to switch to filtering based on Comments.objects.filter(), as that would make me repeat large sections of code in both the view and the template. Right now, if I execute the query above, I get a queryset returning (PersonA, PersonC, PersonA), but if I render that in a template, each person's comment_set will contain all their comments, even the ones outside the date range. Ideally there would be some sort of functionality where I could expand a Person queryset's comment_set into a larger queryset that can be sorted and ordered by comment and put into an object_list generic view. This is normally fairly simple to do in SQL with a JOIN, but I don't want to abandon the ORM, which I use everywhere else.

    Read the article

  • Control webcam from C#

    - by Moulde
    Hi, I have a Creative Live! Cam Optia AF webcam. The software included in the package is able to control the camera in different ways, such as switching autofocus between auto and manual, plus a bunch of gamma and brightness settings. I'm capturing the feed with the AForge computer vision library, and it's working great. But I would like to be able to set the manual focus from inside my application. I've been searching for a tutorial but have come up empty-handed. Can I somehow disassemble the included software, or is there some way to capture the traffic / instructions being sent to the device? Thanks in advance.

    Read the article

  • Tool for response time analysis on JBoss server?

    - by Ariel Vardi
    I am running a pretty high-traffic cluster of JBoss servers serving REST requests, and I am interested in tools that read access logs in the Tomcat format (with the %D parameter) and provide a detailed analysis of response time on a per-call basis. Ideally such a tool would generate a chart showing the progression of response time throughout the day, hour by hour, then a weekly view with averages per day, and a monthly view with averages per week (Cacti style). I've looked for such tools and couldn't find anything. Are any of you aware of something close to that, before I start writing my own? I haven't looked into Cacti extensions yet, but could that be an option?

    Read the article

  • How long until programming becomes a professionally certified (and respected as such) skill such as

    - by Michael Campbell
    Given Toyota's recent issues and the enormous amount of safety-critical functionality being delegated to computers, how long do you think it will be before programming, or perhaps programming certain things (embedded transportation software, (air) traffic control, electrical grid, hospital equipment, nuclear plant security, planes, etc.), becomes regulated? And/or, how long before there will be regulated certifications required before you can call yourself a "software developer", like architects and engineers have now? (NB: I'm from the US, so I don't know how it works in other countries; please forgive my ignorance.)

    Read the article

  • How to deal with a flaw in System.Data.DataTableExtensions.CopyToDataTable()

    - by andy
    Hey guys, so I've come across something which is perhaps a flaw in the extension method .CopyToDataTable. This method is used by importing (in VB.NET) System.Data.DataTableExtensions and then calling the method against an IEnumerable. You would do this if you want to filter a DataTable using LINQ and then restore the DataTable at the end, i.e.:

        Imports System.Data.DataRowExtensions
        Imports System.Data.DataTableExtensions

        Public Class SomeClass
            Private Shared Function GetData() As DataTable
                Dim Data As DataTable
                Data = LegacyADO.NETDBCall
                Data = Data.AsEnumerable.Where(Function(dr) dr.Field(Of Integer)("SomeField") = 5).CopyToDataTable()
                Return Data
            End Function
        End Class

    In the example above, the "Where" filtering might return no results. If this happens, CopyToDataTable throws an exception because there are no DataRows. Why? The correct behavior would be to return a DataTable with Rows.Count = 0. Can anyone think of a clean workaround to this, in such a way that whoever calls CopyToDataTable doesn't have to be aware of this issue? System.Data.DataTableExtensions is a static class, so I can't override the behavior... any ideas? Have I missed something? cheers

    UPDATE: I have submitted this as an issue to Connect. I would still like some suggestions, but if you agree with me, you could vote up the issue at Connect via the link above. cheers
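    One shape a workaround can take, sketched here in C# (the method name is made up, and it needs a reference table to copy the schema from, so callers are not entirely unaware of it):

        using System.Collections.Generic;
        using System.Data;
        using System.Linq;

        public static class DataRowEnumerableExtensions
        {
            // Returns an empty table with the same schema instead of letting
            // CopyToDataTable() throw when the LINQ filter matched no rows.
            public static DataTable CopyToDataTableOrEmpty(this IEnumerable<DataRow> rows,
                                                           DataTable schemaSource)
            {
                return rows.Any() ? rows.CopyToDataTable() : schemaSource.Clone();
            }
        }

    The VB.NET call site above would then become Data.AsEnumerable.Where(...).CopyToDataTableOrEmpty(Data), which keeps the "filter, then rebuild the table" shape without the empty-result surprise.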

    Read the article

  • Best method to search hierarchical data

    - by WDuffy
    I'm looking at building a facility which allows querying for data with hierarchical filtering. I have a few ideas about how I'm going to go about it, but was wondering if there are any recommendations or suggestions that might be more efficient. As an example, imagine that a user is searching for a job. The job areas would be as follows:

        1: Scotland
        2: --- West Central
        3: ------ Glasgow
        4: ------ Etc
        5: --- North East
        6: ------ Ayrshire
        7: ------ Etc

    A user can search somewhere specific (i.e. Glasgow) or in a larger area (i.e. Scotland). The two approaches I am considering are:

        1. Keep a note of children in the database for each record (i.e. cat 1 would have 2, 3, 4 in its children field) and query against that record with SELECT * FROM Jobs WHERE Category IN Areas.childrenField.
        2. Use a recursive function to find all results that have a relation to the selected area.

    The problems I see with these are:

        1. Holding this data in the db means having to keep track of all changes to the structure.
        2. Recursion is slow and inefficient.

    Any ideas, suggestions or recommendations on the best approach? I'm using C# ASP.NET with an MSSQL 2005 DB.
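    Since the back end is SQL Server 2005, a third option worth noting is a recursive common table expression, which pushes the "expand this area and all its descendants" work into one set-based query instead of a stored children list or application-side recursion. A sketch with assumed column names (AreaId/ParentId on Areas, Category on Jobs holding the area id):

        WITH AreaTree AS (
            SELECT AreaId
            FROM   Areas
            WHERE  AreaId = @SelectedArea          -- e.g. Scotland
            UNION ALL
            SELECT a.AreaId
            FROM   Areas a
                   JOIN AreaTree t ON a.ParentId = t.AreaId
        )
        SELECT j.*
        FROM   Jobs j
               JOIN AreaTree t ON j.Category = t.AreaId;

    This avoids maintaining a children field when the structure changes; if the tree becomes very deep or large, nested sets or a path/lineage column are the usual alternatives.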

    Read the article
