Search Results

Search found 5137 results on 206 pages for 'i like traffic lights'.


  • Redeploying an ASP.NET site in IIS7 without files in use interfering

    - by fyjham
    We've got a process that redeploys ASP.NET websites; the deployment code is itself an ASP.NET application. The current method, which has worked for quite a while, is simply to loop over all the files in one folder and copy them over the top of the files in the webroot. The problem is that occasionally files end up being in use and hence can't be copied over. In the past this was intermittent enough not to matter, but on some of our higher-traffic sites it now happens the majority of the time. I'm wondering if anyone has a workaround or an alternative approach that I haven't thought of. Currently my ideas are:

    1. Simply retry each file until it works. That's going to cause errors for a short time, though, which isn't really acceptable.
    2. Deploy to a new folder and update IIS's webroot to point at the new folder. I'm not sure how to do this short of running the application as an administrator and running batch files, which is very untidy.

    Does anyone know the best way to do this, or whether it's possible to do #2 without running the publishing application as a user who has admin access? (I'm willing to grant it special privileges, but I'd prefer to stop short of administrator.)
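    For the mechanics of idea #2, a minimal sketch using the Microsoft.Web.Administration API (shipped with IIS7) is below; the site name and target path are placeholders. Note that this writes to applicationHost.config, so the publishing process still needs elevated rights or delegated write access to that file: it tidies away the batch files rather than solving the permissions question.

        // Sketch: repoint a site's root virtual directory at a freshly deployed
        // folder, so the previous folder's files are never "in use" during a copy.
        // Assumes a reference to Microsoft.Web.Administration.dll (IIS7+).
        using Microsoft.Web.Administration;

        static void SwapWebroot(string siteName, string newPath)
        {
            using (var manager = new ServerManager())
            {
                var site = manager.Sites[siteName];   // e.g. "Default Web Site"
                site.Applications["/"].VirtualDirectories["/"].PhysicalPath = newPath;
                manager.CommitChanges();              // IIS picks up the new root
            }
        }

    A lower-privilege alternative worth testing: drop an app_offline.htm into the webroot before copying. ASP.NET unloads the application domain while the file exists, which releases its file locks, and deleting the file afterwards brings the site back up.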


  • Are there any NOSQL-compatible CMS projects?

    - by Michael
    This question is partially related to an older question (Any CMS is Google App Engine compatible?), but is slightly more general. It seems that in most CMS systems the most fragile failure point is the database: traditional database implementations scale poorly and will never be able to handle unforeseen spikes of traffic. Since Google App Engine was designed to help even small businesses overcome that problem, I had the same question that was asked earlier this year, with less than satisfactory answers. But more generally, where are the CMS projects that support NoSQL databases? Looking over Wikipedia's list of CMS platforms, I see without much effort that every single vendor on the list supports only traditional RDBMSes. I would have expected to see at least one or two projects handling CouchDB or similar engines. I understand the complexities of implementing a NoSQL solution to a problem that is typically solved using the relations cleanly expressed in any RDBMS, but there seems to be a rather wide market gap. Since databases are, today, easily outsourced to Google, Amazon, and others which use NoSQL models, I am amazed that there are not more projects actively pursuing this path. Am I simply not aware? Can someone please point me to projects with real momentum that are developing in this direction? I'm looking for two things:

    - a CMS whose backbone is a NoSQL database, enabling easy database outsourcing (hosted MySQL clusters and similar solutions are not what I'm looking for)
    - a project built to run on either a PaaS architecture like Google App Engine or an IaaS architecture like Amazon EC2

    Any pointers in that direction would be most welcome.


  • How to configure Server Topology for exposing an internal application for external access?

    - by ronaldwidha
    Hi all, I guess this question borders on a Server Fault question. I'd like to know the best configuration for exposing an internal application (in this case a load-balanced ASP.NET MVC application) for external access. More details about the situation:

    - The ASP.NET MVC application is currently running on 2 servers.
    - The 2 servers are behind a Windows Network Load Balancer.
    - All the servers are on-premise, on the internal network.

    I'm thinking of introducing an F5 load balancer in an off-premise DMZ to replace the Windows Network Load Balancer. The F5 would act as the public traffic gateway and load balancer for the 2 servers. However, I'd like internal users not to have to go through the Internet to access the app. The idea I have so far is to keep both the Windows Network Load Balancer and the F5. Each appliance would have its own IP and its own domain name: external users use the public domain name, which hits the F5, whereas internal users use the internal domain name, which hits the Windows Network Load Balancer. Is this a good idea? Or is there a better way of doing this?


  • SQL Group by Minute - Expanded

    - by Barnie
    I am working on something similar to this post: T-SQL - group by minute. However, mine pulls from a message queue, and I need an accurate count of the amount of traffic the message queue is creating/sending, and at what time:

        SELECT * FROM MessageQueue mq

    My expanded version of this is the following:

    A) The user defines a start time and an end time (easy enough using DECLAREd variables @StartTime and @EndTime).
    B) Give the user the option of choosing the "grouping": will the results be broken out by 1-minute, 5-minute, 15-minute, or 30-minute (max) intervals? (I had thought to do this with a CASE statement, but my test attempts fall apart on me.)
    C) Display the data to accurately show a count of what happened during the selected interval (grouping).

    This is where I am at so far:

        DECLARE @StartTime datetime
        DECLARE @EndTime datetime

        SELECT DATEPART(n, mq.cre_date)/5 AS Time  -- trying to group by 5-minute intervals
              ,CONVERT(VARCHAR(10), mq.Cre_Date, 101)
              ,COUNT(*) AS results
        FROM dbo.MessageQueue mq
        WHERE mq.cre_date BETWEEN @StartTime AND @EndTime
        GROUP BY DATEPART(n, mq.cre_date)/5        -- trying to group by 5-minute intervals
               , mq.Cre_Date

    This is the output I would like to achieve:

        [Time]  [Date]       [Message Count]
        1300    06/26/2012   5
        1305    06/26/2012   1
        1310    06/26/2012   100
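    For B), one way to avoid a CASE per grouping is integer arithmetic on the minutes since a fixed anchor date: (DATEDIFF(minute, 0, cre_date) / @Interval) * @Interval rounds every timestamp down to the start of its bucket, whatever the interval. A sketch of how this could be parameterized from C# follows; the table and column names are the question's, while the method and wrapper names are illustrative.

        // Sketch: count messages per N-minute bucket between two times.
        // @Interval works for 1, 5, 15 or 30 without changing the SQL.
        using System;
        using System.Data;
        using System.Data.SqlClient;

        static DataTable MessageCounts(string connStr, DateTime start, DateTime end, int intervalMinutes)
        {
            const string sql = @"
                SELECT DATEADD(minute,
                               (DATEDIFF(minute, 0, mq.cre_date) / @Interval) * @Interval, 0)
                           AS BucketStart,          -- start of each interval
                       COUNT(*) AS MessageCount
                FROM dbo.MessageQueue mq
                WHERE mq.cre_date BETWEEN @StartTime AND @EndTime
                GROUP BY DATEADD(minute,
                               (DATEDIFF(minute, 0, mq.cre_date) / @Interval) * @Interval, 0)
                ORDER BY BucketStart;";

            var table = new DataTable();
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@Interval", intervalMinutes);
                cmd.Parameters.AddWithValue("@StartTime", start);
                cmd.Parameters.AddWithValue("@EndTime", end);
                conn.Open();
                new SqlDataAdapter(cmd).Fill(table);
            }
            return table;
        }

    BucketStart comes back as a single datetime, which can be split into the [Time] and [Date] columns of the desired output at display time.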


  • Scalability 101: How can I design a scalable web application using PHP?

    - by Legend
    I am building a web application and have a couple of quick questions. From what I've learnt, one should not worry about scalability when initially building the app and should only start worrying when the traffic increases. However, this being my first web application, I am not quite sure if I should take an approach where I design things in an ad-hoc manner and later "fix" them. I have been reading stories about people who start off with an app that gets millions of users in a week or two. Not that I will face the same situation, but I can't help but wonder: how do these people do it? Currently I have a shared hosting account on Lunarpages, which got me started building and testing the application. However, I am interested in learning how to build the same application in a scalable manner using the cloud, for instance Amazon's EC2. From my understanding, I can see a couple of components:

    - There is a load balancer that first receives requests and then decides where to route each request.
    - The request is then handled by a server replica that processes it, updates (if required) the database, and sends back the response to the client.
    - If a similar request comes in, a caching mechanism like memcached kicks in and returns objects from the cache.
    - A black box that handles database replication.

    Specifically, I am trying to do the following:

    - Setting up a load balancer (my homework revealed that HAProxy is one such load balancer)
    - Setting up replication so that databases can be synchronized
    - Using memcached
    - Configuring Apache to work with multiple web servers
    - Partitioning the application to use Amazon EC2 and Amazon S3 (my application is something that will need a great deal of storage)

    Finally, how can I avoid burning myself when using Amazon's services? Because this is just a learning phase, I can probably do with 2-3 servers, a simple load balancer, and replication, but I want to avoid accidentally paying loads of money. I am able to find resources on individual topics but am unable to find something that starts off from the big picture. Can someone please help me get started?


  • C# slowdown while creating a bitmap - calculating distances from a large List of places for each pixel

    - by user576849
    I'm creating a graphic of the glow of lights above a geographic location based upon Walker's Law:

        Skyglow = 0.01 * Population * DistanceFromCenter^-2.5

    I have a CSV file of places with 66,000 records using 5 fields (id, name, population, latitude, longitude), parsed in the FormLoad event and stored in:

        List<string[]> placeDataList

    Then I set up nested loops to fill in a bitmap using SetPixel. For each pixel on the bitmap, which represents a coordinate on a map (latitude and longitude), the program loops through placeDataList, calculating the distance from that coordinate (pixel) to each place record. The distance (along with population) is used in a calculation to find how much cumulative skyglow is contributed to the coordinate by each place record. So, for every pixel, 66,000 distance calculations must be made. The problem is that this is predictably EXTREMELY slow: on the order of one line of pixels per 30 seconds or so on a 320-pixel-wide image. This is unrelated to SetPixel, which I know is also slow, because the speed is similarly slow when adding the distance calculation results to an array. I don't actually need to test all 66,000 records for every pixel, only the records within 150 miles (i.e. no skyglow is contributed to a coordinate by a small town 3,000 miles away). But to find which records are within 150 miles of my coordinate, I would still need to loop through all the records for each pixel. I can't use a smaller set of records, because all 66,000 places contribute to skyglow for SOME coordinate in my map. This seems like a catch-22, so I know there must be a better method out there. Like I mentioned, the slowdown is related to how many calculations I'm making per pixel, not anything to do with the bitmap. Any suggestions?

        private void fillPixels(int width)
        {
            Color pixelColor;
            int pixel_w = width;
            int pixel_h = (int)Math.Floor(width * 0.424088664);
            Bitmap bmp = new Bitmap(pixel_w, pixel_h);
            for (int i = 0; i < pixel_h; i++)
                for (int j = 0; j < pixel_w; j++)
                {
                    pixelColor = getPixelColor(i, j);
                    bmp.SetPixel(j, i, pixelColor);
                }
            bmp.Save("Nightfall", System.Drawing.Imaging.ImageFormat.Jpeg);
            pictureBox1.Image = bmp;
            MessageBox.Show("Done");
        }

        private Color getPixelColor(int height, int width)
        {
            int c;
            double glow, d, cityLat, cityLon, cityPop;
            double testLat, testLon;
            int size_h = (int)Math.Floor(size_w * 0.424088664);
            testLat = (height * (24.443136 / size_h)) + 24.548874;
            testLon = (width * (57.636853 / size_w)) - 124.640767;
            glow = 0;
            for (int i = 0; i < placeDataList.Count; i++)
            {
                cityPop = Convert.ToDouble(placeDataList[i][2]);
                cityLat = Convert.ToDouble(placeDataList[i][3]);
                cityLon = Convert.ToDouble(placeDataList[i][4]);
                d = distance(testLat, testLon, cityLat, cityLon, "M");
                if (d < 150)
                    glow = glow + (0.01 * cityPop * Math.Pow(d, -2.5));
            }
            if (glow >= 1)
                glow = 1;
            c = (int)Math.Ceiling(glow * 255);
            return Color.FromArgb(c, c, c);
        }
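    A common fix for this shape of problem is a coarse spatial grid built once after the CSV parse, so each pixel scans only nearby places instead of all 66,000. Below is a sketch meant to sit in the same form class as the code above; placeDataList and distance() come from the question, while the cell size and names are illustrative. 150 miles is about 2.2 degrees of latitude and, at the northern edge of the US, about 3.3 degrees of longitude, so a 4-degree cell guarantees the 3x3 neighborhood covers the cutoff radius.

        // Sketch: bucket places into 4-degree cells once, then per pixel scan
        // only the 3x3 block of cells around the coordinate.
        private const double CellDeg = 4.0;
        private Dictionary<Point, List<string[]>> grid = new Dictionary<Point, List<string[]>>();

        private Point CellOf(double lat, double lon)
        {
            return new Point((int)Math.Floor(lat / CellDeg), (int)Math.Floor(lon / CellDeg));
        }

        private void BuildGrid()   // call once, right after parsing the CSV
        {
            foreach (string[] place in placeDataList)
            {
                Point key = CellOf(Convert.ToDouble(place[3]), Convert.ToDouble(place[4]));
                List<string[]> bucket;
                if (!grid.TryGetValue(key, out bucket))
                    grid[key] = bucket = new List<string[]>();
                bucket.Add(place);
            }
        }

        private double GlowAt(double testLat, double testLon)
        {
            Point c = CellOf(testLat, testLon);
            double glow = 0;
            for (int di = -1; di <= 1; di++)
                for (int dj = -1; dj <= 1; dj++)
                {
                    List<string[]> bucket;
                    if (!grid.TryGetValue(new Point(c.X + di, c.Y + dj), out bucket))
                        continue;   // empty cell: nothing to test
                    foreach (string[] p in bucket)
                    {
                        double d = distance(testLat, testLon,
                                            Convert.ToDouble(p[3]), Convert.ToDouble(p[4]), "M");
                        if (d < 150)
                            glow += 0.01 * Convert.ToDouble(p[2]) * Math.Pow(d, -2.5);
                    }
                }
            return Math.Min(glow, 1.0);
        }

    With 66,000 places spread over a few hundred populated cells, the inner loop drops from 66,000 distance calls per pixel to a few hundred, and getPixelColor reduces to converting GlowAt's result into a gray level.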


  • Suggestion on Database structure for relational data

    - by miccet
    Hi there. I've been wrestling with this problem for quite a while now, and the automatic mails with 'Slow Query' warnings are still popping in. Basically, I have Blogs with a corresponding table, as well as a table that keeps track of how many times each Blog has been viewed. This last table has a huge number of records, since the page is relatively high-traffic and every hit is logged as an individual row. I have tried indexes on the fields included in the WHERE clause, but it doesn't seem to help. I have also tried to clean the table each week by removing records older than one week. So I'm asking you guys: how would you solve this? The query that I know is causing the slowness is generated by Rails and looks like this:

        SELECT count(*) AS count_all
        FROM blog_views
        WHERE (created_at >= '2010-01-01 00:00:01' AND blog_id = 1);

    The tables have the following structures:

        CREATE TABLE IF NOT EXISTS `blogs` (
          `id` int(11) NOT NULL auto_increment,
          `name` varchar(255) default NULL,
          `perma_name` varchar(255) default NULL,
          `author_id` int(11) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          `blog_picture_id` int(11) default NULL,
          `blog_picture2_id` int(11) default NULL,
          `page_id` int(11) default NULL,
          `blog_picture3_id` int(11) default NULL,
          `active` tinyint(1) default '1',
          PRIMARY KEY (`id`),
          KEY `index_blogs_on_author_id` (`author_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;

    And:

        CREATE TABLE IF NOT EXISTS `blog_views` (
          `id` int(11) NOT NULL auto_increment,
          `blog_id` int(11) default NULL,
          `ip` varchar(255) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_blog_views_on_blog_id` (`blog_id`),
          KEY `created_at` (`created_at`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;


  • GridView ObjectDataSource LINQ Paging and Sorting using a multiple-table query

    - by user367426
    I am trying to create a paging and sorting object data source that applies sorting and filtering to the query before execution, then uses the Take and Skip methods, with the aim of retrieving just a subset of results from the database (saving on database traffic). This is based on the following article: http://www.singingeels.com/Blogs/Nullable/2008/03/26/Dynamic_LINQ_OrderBy_using_String_Names.aspx

    Now, I have managed to get this working, even creating lambda expressions to reflect the sort expression returned from the grid, and even finding out the data type to sort for DateTime and Decimal:

        public static string GetReturnType<TInput>(string value)
        {
            var param = Expression.Parameter(typeof(TInput), "o");
            Expression a = Expression.Property(param, "DisplayPriceType");
            Expression b = Expression.Property(a, "Name");
            Expression converted = Expression.Convert(Expression.Property(param, value), typeof(object));
            Expression<Func<TInput, object>> mySortExpression = Expression.Lambda<Func<TInput, object>>(converted, param);
            UnaryExpression member = (UnaryExpression)mySortExpression.Body;
            return member.Operand.Type.FullName;
        }

    The problem I have is that many of the queries return joined tables, and I would like to sort on fields from the other tables. When executing a query, you can create a function that assigns the properties from other tables to properties created in the partial class:

        public static Account InitAccount(Account account)
        {
            account.CurrencyName = account.Currency.Name;
            account.PriceTypeName = account.DisplayPriceType.Name;
            return account;
        }

    So my question is: is there a way to assign the value from the joined table to the property of the current table's partial class? I have tried using:

        from a in dc.Accounts
        where a.CompanyID == companyID && a.Archived == null
        select new { PriceTypeName = a.DisplayPriceType.Name }

    but this seems to mess up my SortExpression. Any help on this would be much appreciated; I do understand that this is complex stuff.
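    One way to keep the sort on the SQL side without first copying joined values onto the entity is to build the OrderBy call from a dotted property path, walking the navigation properties with Expression.Property. This is a sketch of the idea, not the article's code; the helper name and the usage line are illustrative.

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        static class QueryableSortExtensions
        {
            // Sorts by a path such as "DisplayPriceType.Name", so fields from
            // joined tables stay sortable before Skip/Take runs on the server.
            public static IQueryable<T> OrderByPath<T>(this IQueryable<T> source, string path)
            {
                var param = Expression.Parameter(typeof(T), "o");
                Expression body = param;
                foreach (var member in path.Split('.'))
                    body = Expression.Property(body, member);   // o.DisplayPriceType.Name

                var keySelector = Expression.Lambda(body, param);
                var call = Expression.Call(
                    typeof(Queryable), "OrderBy",
                    new[] { typeof(T), body.Type },
                    source.Expression, Expression.Quote(keySelector));

                return source.Provider.CreateQuery<T>(call);
            }
        }

        // usage: dc.Accounts.OrderByPath("DisplayPriceType.Name").Skip(40).Take(20)

    Because the key selector is typed with the property's real type rather than boxed to object, DateTime and Decimal columns sort correctly without the GetReturnType detour.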


  • How to use urlencoded urls with the GA tracking code

    - by Fake51
    I've got a site where a booking page embeds an iframe with the actual booking form, and I need to track traffic from the parent site to the child iframe. This should all work just fine with the normal GA code, using JavaScript like:

        <script type="text/javascript">
        try {
            var pageTracker = _gat._getTracker("<UA CODE HERE>");
            pageTracker._setDomainName("none");
            pageTracker._setAllowLinker(true);
            pageTracker._setAllowHash(false);
            pageTracker._trackPageview();
        } catch(err) {}
        </script>

    And then, of course, using the _getLinkerUrl() function to get a URL with the proper parameters. So far so good; this basically works (at least I know the principle works, as I've got it working on other pages). However, and this is the problem: the server that serves up the page in the iframe was configured by a complete and utter moron (or, alternatively, created by a complete and utter moron). It chokes on '=' characters, so in order to request the iframe page I need to urlencode the '=' signs, but the GA code seems unable to parse the URL when this is done. So the questions:

    1. Has anyone come across this?
    2. Does anyone know of any solutions to this problem?


  • Best solution for __autoload

    - by tpk
    As our PHP5 OO application grew (in both size and traffic), we decided to revisit the __autoload() strategy. We always name the file after the class definition it contains, so class Customer would be contained within Customer.php. We used to list the directories in which a file could potentially exist and search until the right .php file was found. This is quite inefficient, because you potentially go through a number of directories you don't need to, and you do so on every request (thus making loads of stat() calls). Solutions that come to my mind:

    - Use a naming convention that dictates the directory name (similar to PEAR). Disadvantage: doesn't scale too well, resulting in horrible class names.
    - Come up with some kind of pre-built array of the locations (Propel does this for its __autoload). Disadvantage: requires a rebuild before any deploy of new code.
    - Build the array "on the fly" and cache it. This seems to be the best solution, as it allows for any class names and directory structure you want, and is fully flexible in that new files just get added to the list. The concerns are where to store it and what to do about deleted/moved files. For storage we chose APC, as it doesn't have the disk I/O overhead. With regard to file deletes, it doesn't matter, as you probably don't want to require them anywhere anyway. As for moves... that's unresolved (we ignore it, as historically it hasn't happened very often for us).

    Any other solutions?


  • Expose webservice directly to webclients or keep a thin server-side script layer in between?

    - by max
    Hi, I'm developing a REST webservice (Java, Jersey). The people I'm doing this for want to access the webservice directly via JavaScript. Some instinct tells me this is not a good idea, but I cannot really explain that instinct. My natural approach would have been to have the webservice do the real logic and database access, but also to have a (relatively thin) server-side script layer (e.g. in PHP). Clients would talk to the PHP layer, which in turn would talk to the webservice. (The webservice would be local to the Apache/PHP server and would implicitly trust calls from the script layer; the script layer would take care of session management.) (Btw, I am not talking about just hiding the webservice behind an Apache which simply redirects calls.) But as I find myself at a loss for words and arguments to explain my instinct, I wonder whether my instinct is right; note that while I have been developing all kinds of software in all kinds of languages and frameworks for some 17 years, this is the first time I have developed a webservice. So my question is basically: what are your opinions? Are there any standard setups? Is my instinct totally wrong? Or partially? ;P

    Many thanks, Max

    PS: I might add a few bits of information about the planned usage of the whole application:

    - It will be accessed by different kinds of users, partly the general public, partly privileged.
    - Thus, all major OS/browser combinations can be expected as clients; however, writing the client is not my responsibility.
    - It will potentially have very high load/traffic.
    - The logic of the webservice will later be massively expanded for another product which is basically a superset of the functionality of the current project.
    - There is a significant likelihood that at some point an API should be exposed for use by 3rd-party developers (obviously, with some restrictions).
    - At some point, the public view of the product should become accessible via smartphones too (in other words, maybe a customized version of the site to adapt to the smaller display and different input methods).


  • Why does PDO print my password when the connection fails?

    - by Joe Hopfgartner
    I have a simple website where I establish a connection to a MySQL server using PDO:

        $dbh = new PDO('mysql:host=localhost;dbname=DB;port=3306', 'USER', 'SECRET',
                       array(PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES utf8"));

    I had some traffic on my site and the server's connection limit was reached, and the website threw this error, with my PLAIN password in it!

        Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[08004] [1040] Too many connections' in /home/premiumize-me/html/index.php:64
        Stack trace:
        #0 /home/premiumize-me/html/index.php(64): PDO->__construct('mysql:host=loca...', 'USER', 'SECRET', Array)
        #1 {main}
          thrown in /home/premiumize-me/html/index.php on line 64

    Ironically, I switched to PDO for security reasons, so this really shocked me, because this exact error is something you can provoke very easily on most sites using simple HTTP flooding. I have now wrapped my connection in a try/catch clause, but still, I think this is catastrophic! So, I am new to PDO, and my questions are: What do I have to consider to be safe? How do I establish a connection in a secure way? Are there other known security holes like this one that I have to be aware of?


  • Choosing approach for an IM client-server app

    - by John
    Update: totally re-wrote this to be more succint. I'm looking at a new application, one part of which will be very similar to standard IM clients, i.e text chat, ability to send attachments, maybe some real-time interaction like a multi-user whiteboard. It will be client-server, i.e all traffic goes through my central server. That means if I want to support cross-communication with other IM systems, I am still free to pick any protocol for my own client<--server communication - my server can use XMPP or whatever to talk to other systems. Clients are expected to include desktop apps, but probably also browser-based as well either through Flex/Silverlight or HTML/AJAX. I see 3 options for my own client-server communication layer: XMPP. The benefits are clients already exist as do open-source servers. However it requires the most up-front research/learning and also appears like it might raise legal issues due to GPL. Custom sockets. A server app makes connections with the clients, allowing any text/binary data to be sent very fast. However this approach requires building said server from scratch, and also makes a JS client tricky Servlets (or similar web server). Using tried and tested Java web-stack, clients send HTTP requests similar to AJAX-based websites. The benefit is the server is easy to write using well-established technologies, and easy to talk to. But what restrictions would this bring? Is it appropriate technology for real-time communication? Advice and suggests are welcome, especially what pros and cons surround using a web-server approach as compared to a socket-based approach.


  • How can I get a list of modified records from a SQL Server database?

    - by Pixelfish
    I am currently in the process of revamping my company's management system to run a little leaner in terms of network traffic. Right now I'm trying to figure out an effective way to query only the records that have been modified (by any user) since the last time I asked. When the application starts, it loads the job information and caches it locally, like so:

        SELECT * FROM jobs

    I write out the date/time a record was modified, like so:

        UPDATE jobs SET Widgets = @Widgets, LastModified = GetDate() WHERE JobID = @JobID

    When any user requests the list of jobs, I query for all records modified since the last time I requested the list:

        SELECT * FROM jobs WHERE LastModified >= @LastRequested

    and store the date/time of the request to pass in as @LastRequested when the user asks again. In theory this returns only the records modified since the last request. The issues I'm running into are that the user's date/time is not always quite in sync with the server's date/time, and the server load from querying an unindexed date/time column. Is there a more effective system than querying by date/time information?
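    A conventional alternative is to let SQL Server stamp changes itself with a rowversion column: it is a database-wide counter that increases on every insert and update, so the clients' clocks never enter the picture, and an index on it stays cheap to seek. A sketch under the assumption that a column has been added with ALTER TABLE jobs ADD RowVer rowversion (and indexed); the names are illustrative.

        // Sketch: fetch only rows changed since the last fetch, keyed on the
        // server-side rowversion instead of client date/time.
        using System.Data;
        using System.Data.SqlClient;

        static class JobSync
        {
            public static DataTable FetchChangedJobs(string connectionString, byte[] lastSeen)
            {
                var result = new DataTable();
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT * FROM jobs WHERE RowVer > @lastSeen", conn))
                {
                    // an all-zero rowversion on the first call returns every row
                    cmd.Parameters.Add("@lastSeen", SqlDbType.Timestamp).Value =
                        lastSeen ?? new byte[8];
                    conn.Open();
                    new SqlDataAdapter(cmd).Fill(result);
                }
                // the caller remembers the MAX(RowVer) from this batch and
                // passes it back in as lastSeen on the next request
                return result;
            }
        }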


  • [PHP] MySQL Processlist filled with "Sleep" entries leading to "Too many connections"?

    - by edorian
    Hi, I'd like to ask your help on a long-standing issue with PHP/MySQL connections. Every time I execute a SHOW PROCESSLIST command, it shows me about 400 idle (status: Sleep) connections to the database server, coming from our 5 webservers. That was never much of a problem (and I didn't find a quick solution), but recently traffic numbers increased, and since then MySQL has repeatedly reported "Too many connections" problems, even though 350+ of those connections are in the "sleep" state. Also, a server can fail to get a MySQL connection even while there are sleeping connections from that same server. All those connections vanish when an Apache server is restarted. The PHP code used to create the database connections uses the plain "mysql" module, the "mysqli" module, PEAR::DB, and the Zend Framework Db Adapter (different projects). NONE of the projects uses persistent connections. Raising the connection limit is possible, but it doesn't seem like a good solution, since it's at 450 now and there are only 20-100 "real" connections at a time anyway. My question: why are there so many connections in the sleep state, and how can I prevent that? Thank you for your time; if there's anything unclear or missing, please let me know.


  • Add a banner at the top of an unknown HTML page on the fly; send a note to network users

    - by Vi
    I want to be able to send messages to users of the network that my computer routes for, by modifying requested pages to add messages like:

    - "Warning: this access point is going down soon"
    - "Network quota exhausted"
    - "You are overusing the network connection and may be banned"
    - "Security notice: this is an unencrypted network, so everybody around can see this page too"

    It seems the only good solution, as there is no general "send notice to users of my access point" mechanism. All traffic is already routed through -j REDIRECT, socat, and a remote SOCKS5 server, so intercepting unencrypted content should be easy. What is the best way to add a message? Just inserting

        <div style="color:white; background-color:red">Sorry, connect_me access point is going down soon...</div>

    as the first element of <body> breaks the design of some pages (e.g. Wikipedia). Should I also add a "Cache-Control" directive to prevent persisting such messages? (smbclient pop-up messages are a poor solution, as they are Windows-only and can disturb users who are not using the network.)


  • How to store and collect data for mining such information as most viewed for last 24 hours, last 7 days, etc.

    - by Kirzilla
    Hello. Let's imagine that we have a high-traffic project (a tube site) which should provide sorting on these options (NOT in real time):

    - top viewed
    - top viewed last 24 hours
    - top viewed last 7 days
    - top viewed last 30 days
    - top rated last 365 days

    The number of videos is about 200K, and all information about the videos is stored in MySQL. The number of daily video views is about 1.5 million. As instruments we have the hard disk drive (text files), MySQL, and Redis. How should I store such information? The first idea is to log all visits to text files (a single file per hour, for example visits_20080101_00.log). At the beginning of each hour, calculate views per video for the previous hour and insert this information into MySQL. Then recalculate the totals (for the last 24 hours) and update the statistics in the tables. At the beginning of every day, do the same but recalculate for the last 7 days, the last 30 days, and the last 365 days. This method seems very poor to me, because we would have to store information about the last 365 days for each video to make the calculations correct. Are there any other good methods? Should we choose other instruments for this? Thank you.
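    One common refinement of the hourly-log idea is to keep a counter per video per hour bucket, so "last 24 hours" is just the sum of the 24 newest buckets and nothing is recalculated from raw logs. A minimal in-memory sketch of the layout follows; the class is purely illustrative, and in production the same keys would map onto Redis counters or a (video_id, hour_bucket, views) MySQL table.

        // Sketch: per-hour view buckets; a rolling window is a sum of buckets.
        using System;
        using System.Collections.Generic;

        class ViewCounter
        {
            private static readonly DateTime Epoch = new DateTime(1970, 1, 1);
            // "videoId:hourBucket" -> views
            private readonly Dictionary<string, long> buckets = new Dictionary<string, long>();

            private static long HourOf(DateTime utc)
            {
                return (long)(utc - Epoch).TotalHours;
            }

            public void RecordView(long videoId, DateTime utc)
            {
                string key = videoId + ":" + HourOf(utc);
                long n;
                buckets.TryGetValue(key, out n);
                buckets[key] = n + 1;
            }

            // last 24 hours = 24 buckets, 7 days = 168, 30 days = 720;
            // for 365 days, switch to daily buckets (365 per video).
            public long ViewsInLastHours(long videoId, int hours, DateTime nowUtc)
            {
                long now = HourOf(nowUtc);
                long sum = 0;
                for (long h = now - hours + 1; h <= now; h++)
                {
                    long n;
                    if (buckets.TryGetValue(videoId + ":" + h, out n))
                        sum += n;
                }
                return sum;
            }
        }

    Buckets can be dropped once they fall out of the longest window, so storage per video stays bounded at a few hundred counters rather than a year of raw rows.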


  • What kind of online hosting do I need for a WCF-based service?

    - by mafutrct
    First of all, I'm not sure if SO is the right place to ask; please migrate me if needed. I would like to host a WCF-based service so it is available to everyone. While hosting it on my personal, local servers succeeded, I would prefer to move it to an external service provider for various reasons. I'll be blunt: I have no clue about hosting providers. I know there are webhosters, virtual servers, root servers, and several other services. What I would like to know is what kind of hosting I need in my case. I understand that a root server would easily fulfil my requirements, but that is not exactly cheap. My requirements:

    - The program I'd like to run on the server requires .NET 4, preferably on a Windows machine.
    - Access to a folder in the file system is much appreciated (1 GB of storage is enough by far).
    - Communication with clients (applications written in .NET) happens via a port opened on the server.
    - Traffic is low (<< 1 GB/month?).
    - There is no website.
    - Having the provider perform updates would be nice.

    My understanding is that a virtual server would be a possible solution. Prices seem to start at around 5€/month, which is OK for me. However, I read that for these cheap solutions RAM is severely limited (~400 MB), and I'm not confident that is enough to run Windows and a .NET application.
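    For context on what the hosting actually has to support: a self-hosted WCF service is just a console (or Windows service) process listening on one TCP port, so it needs the .NET 4 runtime, a writable folder, and permission to open that port, and no IIS website at all. A minimal sketch, with the contract, names, and port all being placeholders:

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IPingService
        {
            [OperationContract]
            string Ping(string name);
        }

        public class PingService : IPingService
        {
            public string Ping(string name) { return "Hello, " + name; }
        }

        class Program
        {
            static void Main()
            {
                var host = new ServiceHost(typeof(PingService),
                    new Uri("net.tcp://localhost:8000/ping"));
                host.AddServiceEndpoint(typeof(IPingService), new NetTcpBinding(), "");
                host.Open();   // fails if the provider's firewall blocks the port
                Console.WriteLine("Listening; press Enter to stop.");
                Console.ReadLine();
                host.Close();
            }
        }

    That footprint is what makes a small Windows virtual server a plausible fit, provided the provider allows custom inbound ports.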


  • How can I measure my (SAMP) server's bandwidth usage?

    - by enkrates
    I'm running a Solaris server that serves PHP through Apache. What tools can I use to measure the bandwidth my server is currently using? I use Google Analytics to measure traffic, but as far as I know it ignores file size. I have a rough idea of the average size of the pages I serve and can do a back-of-the-envelope calculation of my bandwidth usage by multiplying page views (from Google) by average page size, but I'm looking for a solution that is more rigorous and exact. Also, I'm not trying to throttle anything or implement usage caps or anything like that; I'd just like to measure the bandwidth usage so I know what it is. An example of what I'm after is the usage meter that Slicehost provides in their admin website for their users. They tell me (for another site I run) how much bandwidth I've used each month, and they also split the usage into uploading and downloading. So it seems this data can be measured, and I'd like to be able to do it myself. To put it simply: what is the conventional method for measuring the bandwidth usage of my server?


  • DB Strategy for inserting into a high read table (Sql Server)

    - by Tom
    Looking for strategies for a very large table whose data is maintained for reporting and historical purposes, while only a very small subset of that data is used in daily operations.

    Background: We have Visitor and Visit tables which are continuously updated by our consumer-facing site. These tables contain information on every visit and visitor, including bots and crawlers, direct traffic that does not result in a conversion, etc. Our back-end site allows management of the visitors (leads) from the front-end site. Most of the management occurs on a small subset of our visitors (visitors that become leads). The vast majority of the data in our Visitor and Visit tables is maintained only for a much smaller subset of user activity (basically reporting-type functionality). This is NOT an indexing problem; we have done all we can with indexing and with keeping our indexes clean, small, and unfragmented. (PS: we do not currently have the budget or expertise for a data warehouse.)

    The problem: We would like the system to be more responsive to our end users when they are querying, for instance, the list of their assigned leads. Currently the query runs against a huge data set of mostly irrelevant data. I am pondering a few ideas. One involves new tables and a fairly major re-architecture; I'm not asking for help on that. The other involves creating redundant data (for instance, a Visitor_Archive and a Visitor_Small table), where the larger Visitor and Visit tables exist for inserts and history/reporting, and the smaller Visitor_Small table exists for managing leads (sending a lead an email, looking up a lead's phone number, listing my assigned leads, etc.). The reason I am reaching out is that I would love opinions on the best way to keep the Visitor_Archive and Visitor_Small tables in sync. Replication? Can I use replication to replicate only data with a certain column value (FooID = x)? Any other strategies?


  • SCVMM 2012 R2 - Installing Virtual Switch Fails with Error 2916

    - by Brian M.
    So I've been attempting to teach myself SCVMM 2012 and Hyper-V Server 2012 R2, and I seem to have hit a snag. I've connected my Hyper-V host to SCVMM 2012 successfully and created a logical network, a logical switch, and an uplink port profile (which I essentially blew through with the default settings). However, when I attempt to create a virtual switch on my Hyper-V host, I run into an issue. The job uses the logical network settings I created to configure the virtual switch, but when it tries to apply them to the host, it stalls and eventually fails with the following error:

        Error (2916)
        VMM is unable to complete the request. The connection to the agent vmhost1.test.loc was lost.
        WinRM: URL: [h**p://vmhost1.test.loc:5985], Verb: [GET],
        Resource: [h**p://schemas.microsoft.com/wbem/wsman/1/wmi/root/virtualization/v2/Msvm_ConcreteJob?InstanceID=2F401A71-14A2-4636-9B3E-10C0EE942D33]
        Unknown error (0x80338126)

        Recommended Action
        Ensure that the Windows Remote Management (WinRM) service and the VMM agent are installed and running and that a firewall is not blocking HTTP/HTTPS traffic. Ensure that VMM server is able to communicate with econ-hyperv2.econ.loc over WinRM by successfully running the following command: winrm id -r:vmhost1.test.loc
        This problem can also be caused by a Windows Management Instrumentation (WMI) service crash. If the server is running Windows Server 2008 R2, ensure that KB 982293 (h**p://support.microsoft.com/kb/982293) is installed on it. If the error persists, restart vmhost1.test.loc and then try the operation again. Refer to h**p://support.microsoft.com/kb/2742275 for more details.

    I restarted the server and upon booting am greeted with the message "No active network adapters found." I load up PowerShell and run Get-NetAdapter -IncludeHidden to see what's going on, and get the following:

        Name                      InterfaceDescription                     ifIndex Status
        ----                      --------------------                     ------- ------
        Local Area Connection* 5  WAN Miniport (PPPOE)                           6 Di...
        Ethernet                  Microsoft Hyper-V Network Switch Def...       10
        Local Area Connection* 1  WAN Miniport (L2TP)                            2 Di...
        Local Area Connection* 8  WAN Miniport (Network Monitor)                 9 Up
        Local Area Connection* 4  WAN Miniport (PPTP)                            5 Di...
        Ethernet 2                Broadcom NetXtreme Gigabit Ethernet           13 Up
        Local Area Connection* 7  WAN Miniport (IPv6)                            8 Up
        Local Area Connection* 9  Microsoft Kernel Debug Network Adapter        11 No...
        Local Area Connection* 3  WAN Miniport (IKEv2)                           4 Di...
        Local Area Connection* 2  WAN Miniport (SSTP)                            3 Di...
        vSwitch (TEST Test Swi... Hyper-V Virtual Switch Extension Ada...       17 Up
        Local Area Connection* 6  WAN Miniport (IP)                              7 Up

    Now the machine is no longer visible on the network, and I don't have the slightest idea what went wrong or, more importantly, how to undo the damage I caused in order to get back to where I was (save for reinstalling Hyper-V Server, but I really would rather know what's going on and how to fix it)! Does anybody have any ideas? Much appreciated!


  • System Account Logon Failures every 30 seconds

    - by floyd
    We have two Windows 2008 R2 SP1 servers running in a SQL failover cluster. On one of them we are getting the following events in the security log every 30 seconds. The parts that are blank are actually blank. Has anyone seen similar issues, or can anyone assist in tracking down the cause of these events? No other event logs show anything relevant that I can tell.

        Log Name:      Security
        Source:        Microsoft-Windows-Security-Auditing
        Date:          10/17/2012 10:02:04 PM
        Event ID:      4625
        Task Category: Logon
        Level:         Information
        Keywords:      Audit Failure
        User:          N/A
        Computer:      SERVERNAME.domainname.local
        Description:   An account failed to log on.
        Subject:
            Security ID:    SYSTEM
            Account Name:   SERVERNAME$
            Account Domain: DOMAINNAME
            Logon ID:       0x3e7
        Logon Type: 3
        Account For Which Logon Failed:
            Security ID:    NULL SID
            Account Name:
            Account Domain:
        Failure Information:
            Failure Reason: Unknown user name or bad password.
            Status:         0xc000006d
            Sub Status:     0xc0000064
        Process Information:
            Caller Process ID:   0x238
            Caller Process Name: C:\Windows\System32\lsass.exe
        Network Information:
            Workstation Name:       SERVERNAME
            Source Network Address: -
            Source Port:            -
        Detailed Authentication Information:
            Logon Process:            Schannel
            Authentication Package:   Kerberos
            Transited Services:       -
            Package Name (NTLM only): -
            Key Length:               0

    A second event follows every one of the above events:

        Log Name:      Security
        Source:        Microsoft-Windows-Security-Auditing
        Date:          10/17/2012 10:02:04 PM
        Event ID:      4625
        Task Category: Logon
        Level:         Information
        Keywords:      Audit Failure
        User:          N/A
        Computer:      SERVERNAME.domainname.local
        Description:   An account failed to log on.
        Subject:
            Security ID:    NULL SID
            Account Name:   -
            Account Domain: -
            Logon ID:       0x0
        Logon Type: 3
        Account For Which Logon Failed:
            Security ID:    NULL SID
            Account Name:
            Account Domain:
        Failure Information:
            Failure Reason: An Error occured during Logon.
            Status:         0xc000006d
            Sub Status:     0x80090325
        Process Information:
            Caller Process ID:   0x0
            Caller Process Name: -
        Network Information:
            Workstation Name:       -
            Source Network Address: -
            Source Port:            -
        Detailed Authentication Information:
            Logon Process:            Schannel
            Authentication Package:   Microsoft Unified Security Protocol Provider
            Transited Services:       -
            Package Name (NTLM only): -
            Key Length:               0

    EDIT UPDATE: I have a bit more information to add. I installed Network Monitor on this machine, filtered for Kerberos traffic, and found the following, which corresponds to the timestamps in the security audit log:

        A Kerberos AS_Request
        Cname: CN=SQLInstanceName  Realm: domain.local  Sname: krbtgt/domain.local
        Reply from DC: KRB_ERROR: KDC_ERR_C_PRINCIPAL_UNKNOWN

    I then checked the security audit logs of the DC which responded and found the following:

        A Kerberos authentication ticket (TGT) was requested.
        Account Information:
            Account Name:        X509N:<S>CN=SQLInstanceName
            Supplied Realm Name: domain.local
            User ID:             NULL SID
        Service Information:
            Service Name: krbtgt/domain.local
            Service ID:   NULL SID
        Network Information:
            Client Address: ::ffff:10.240.42.101
            Client Port:    58207
        Additional Information:
            Ticket Options:          0x40810010
            Result Code:             0x6
            Ticket Encryption Type:  0xffffffff
            Pre-Authentication Type: -
        Certificate Information:
            Certificate Issuer Name:
            Certificate Serial Number:
            Certificate Thumbprint:

    So it appears to be related to a certificate installed on the SQL machine, but I still don't have any clue why, or what's wrong with said certificate. It's not expired, etc.


  • Hyper-V + RRAS NAT + Port Forwarding + RDP, can I get it all working together?

    - by Tom Bull
    I am running a Windows 2008 R2 server with various services running natively and two virtualised servers running on Hyper-V. The hardware server, which I'm going to call REAL1, has one external NIC, to which I can assign any of the following IP addresses: 1.2.3.4, 1.2.3.5, 1.2.3.6, etc.

    I need to achieve the following: I would like to be able to connect to REAL1 via Remote Desktop (RDP, port 3389) on one IP address (say 1.2.3.4), but also to the virtualised servers (I'm going to call them VIRTUAL1 and VIRTUAL2) on the other available IP addresses (say 1.2.3.5 and 1.2.3.6). The easiest way of doing this is to connect the virtual servers directly to the external interface and assign each its own IP address: REAL1 gets 1.2.3.4, VIRTUAL1 gets 1.2.3.5, and VIRTUAL2 gets 1.2.3.6. Unfortunately, although I don't directly manage the two virtual servers, I am responsible for their security, and I would like to have some kind of firewall between the virtual servers and the Internet. I have tried running a virtual-machine firewall but found the performance on Hyper-V pretty terrible. The alternative I am now trying is Routing and Remote Access (RRAS):

    - I have set up a virtual network called 'Internal', and REAL1 has a virtual network adapter connected to this virtual network.
    - I have connected each of the virtual servers to this network too.
    - I have assigned each server a static IP address on this virtual network (REAL1 has 10.1.1.1, VIRTUAL1 has 10.1.1.2, and VIRTUAL2 has 10.1.1.3).
    - I have installed RRAS and set up a NAT. The external interface is the external NIC; the internal interface is the virtual NIC connected to the internal network.
    - I have assigned all the available external IP addresses to the external NIC on REAL1.

    The virtual servers have been set up appropriately so that their default gateway points to 10.1.1.1, and they can both reach the outside. Success! RRAS is routing packets. The problem I have is that when I try to port-forward services from an external IP address on REAL1, it only works if there is not already a service bound to the port. Remote Desktop 'greedily' binds to every available IP address on REAL1 on port 3389, so I can't selectively forward incoming traffic for 1.2.3.5:3389 to 10.1.1.2:3389. RRAS will allow me to set up this port forwarding, and no errors come up; it just doesn't work. So the question I have is: is there a better way of doing this? Or at least, is there a way of resolving the apparent conflict between RRAS and everything else on the physical server?


  • IIS 7.5 FTPS external access - 534 Policy requires SSL

    - by markmnl
    I have set up an FTP site that requires SSL, but when I try to connect to it externally I get the error:

        220 Microsoft FTP Service
        534 Policy requires SSL.

    I know, I set it so! Why doesn't the client fetch the SSL cert from the site and allow me to log on? (Incidentally, beware of all the tutorials that Allow but do not Require SSL; while that will "solve" the problem, it will be because SSL is not being used!) I suspect I may need a client that supports FTPS (FTP over SSL), and Windows Explorer just uses IE, which does not. But trying FileZilla and WinSCP, I get a little further and then it hangs on the TLS/SSL negotiation, expecting a response from the server...

    UPDATE: I have tried the following (from http://learn.iis.net/page.aspx/309/configuring-ftp-firewall-settings/):

    - Configuring the passive port range for the FTP service.
    - Configuring the external IPv4 address for the specific FTP site.
    - Configuring the firewall to allow the FTP service to listen on all ports that it opens.
    - Disabling stateful FTP filtering so that Windows Firewall will not block FTP traffic.

    And still I get (in FileZilla, trying both Active and Passive):

        Status:   Connecting to 203.x.x.x:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220 Microsoft FTP Service
        Command:  AUTH TLS
        Response: 234 AUTH command ok. Expecting TLS Negotiation.
        Status:   Initializing TLS...
        Error:    Connection timed out
        Error:    Could not connect to server

    The Windows firewall logs unhelpfully have nothing to say.

    UPDATE 2: Turning the firewall off does not resolve the problem. I cannot believe how difficult it is to get something so simple to work; even after following the documentation, it does not work.

    UPDATE 3: Running FileZilla locally and connecting through the loopback works in Active mode; in Passive mode I get as far as:

        Command:  LIST
        Response: 150 Opening BINARY mode data connection.
        Error:    GnuTLS error -53: Error in the push function.

    Turning the firewall off at both ends, I still cannot connect the client and get the same error as above.


  • Apache2 - mod_expires and mod_rewrite not working in httpd.conf - serving content from Tomcat

    - by Ankit Agrawal
    Hi, I am using an Apache2 server running on Debian which forwards all HTTP requests to Tomcat, installed on the same machine. I have two files under my /etc/apache2/ folder, apache2.conf and httpd.conf. I modified the httpd.conf file to look like the following:

        # forward all http requests on port 80 to tomcat
        ProxyPass / ajp://127.0.0.1:8009/
        ProxyPassReverse / ajp://127.0.0.1:8009/

        # gzip text content
        AddOutputFilterByType DEFLATE text/plain
        AddOutputFilterByType DEFLATE text/html
        AddOutputFilterByType DEFLATE text/xml
        AddOutputFilterByType DEFLATE text/css
        AddOutputFilterByType DEFLATE text/javascript
        AddOutputFilterByType DEFLATE application/xml
        AddOutputFilterByType DEFLATE application/xhtml+xml
        AddOutputFilterByType DEFLATE application/rss+xml
        AddOutputFilterByType DEFLATE application/javascript
        AddOutputFilterByType DEFLATE application/x-javascript
        DeflateCompressionLevel 9
        BrowserMatch ^Mozilla/4 gzip-only-text/html
        BrowserMatch ^Mozilla/4\.0[678] no-gzip
        BrowserMatch \bMSIE !no-gzip !gzip-only-text/html

        # turn on Expires, mark all static content to expire in a week,
        # and unset Last-Modified and ETag
        ExpiresActive On
        ExpiresDefault A0
        <FilesMatch "\.(jpg|jpeg|png|gif|js|css|ico)$">
            ExpiresDefault A604800
            Header unset Last-Modified
            Header unset ETag
            FileETag None
            Header append Cache-Control "max-age=604800, public"
        </FilesMatch>

        RewriteEngine On

        # rewrite all www.example.com/content/XXX-01.js and YYY-01.css files to XXX.js and YYY.css
        RewriteRule ^content/(js|css)/([a-z]+)-([0-9]+)\.(js|css)$ /content/$1/$2.$4

        # remove all query parameters from the URL after we are done with it
        RewriteCond %{THE_REQUEST} ^GET\ /.*\;.*\ HTTP/
        RewriteCond %{QUERY_STRING} !^$
        RewriteRule .* http://example.com%{REQUEST_URI}? [R=301,L]

        # rewrite all www.example.com to example.com
        RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]

    I want to achieve the following:

    1. forward all traffic to Tomcat
    2. gzip all the text content
    3. put a 1-week expiry header on all static files and unset the ETag/Last-Modified headers
    4. rewrite all js and css files to a certain format
    5. remove all the query parameters from the URL
    6. forward all www.example.com traffic to example.com

    The problem is that only 1 and 2 are working. I tried a lot of combinations, but the expires and rewrite rules (3-6) do not work at all. There is no error; the rules are simply ignored. I also tried moving these rules to the apache2.conf and .htaccess files, but that didn't work either. The expires and rewrite modules are ENABLED. Please let me know what I should do to fix this:

    1. Do I need to add something else to the httpd.conf file (like Options +FollowSymLinks) or something else?
    2. Do I need to add something to the apache2.conf file?
    3. Do I need to move these rules to an .htaccess file? If yes, what should I write in that file, and where should I keep it: in the /etc/apache2/ folder or the /var/www/ folder?
    4. Any other info to make this work?

    Thanks, Ankit

