Search Results

Search found 4618 results on 185 pages for 'websites'.


  • Common vulnerabilities for WinForms applications

    - by David Stratton
    I'm not sure if this is on-topic here, but it's so specific to .NET WinForms that I believe it makes more sense here than at the Security Stack Exchange site. (Also, it's related strictly to secure coding, and I think it's as on-topic as any question asking about common website vulnerabilities that I see all over the site.) For years, our team has been doing threat modeling on website projects. Part of our template includes the OWASP Top 10 plus other well-known vulnerabilities, so that when we're doing threat modeling, we always make sure that we have a documented process for addressing each of those common vulnerabilities. Example: SQL Injection (OWASP A-1), Standard Practice:
    - Use stored parameterized procedures for data access where feasible
    - Use parameterized queries if stored procedures are not feasible (e.g. a 3rd-party DB that we can't modify)
    - Escape single quotes only when the above options are not feasible
    - Database permissions must be designed with the least-privilege principle
    - By default, users/groups have no access
    - While developing, document the access needed for each object (table/view/stored procedure) and the business need for that access
    [snip] At any rate, we used the OWASP Top 10 as the starting point for commonly known vulnerabilities specific to websites. (Finally, to the question.) On rare occasions, we develop WinForms or Windows Service applications when a web app doesn't meet the needs. I'm wondering if there is an equivalent list of commonly known security vulnerabilities for WinForms apps. Off the top of my head, I can think of a few: SQL injection is still a concern; buffer overflow is normally prevented by the CLR, but becomes more likely when unmanaged code is mixed in with managed code; and .NET code can be decompiled, so sensitive information should be stored encrypted in the app.config rather than in code. Is there such a list, or even several versions of such a list, from which we can borrow to create our own? If so, where can I find it? I haven't been able to find one, but if it exists it would be a great help to us, and also to other WinForms developers.
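
    The parameterized-query practice above looks essentially the same in any data-access API. As a minimal, language-neutral sketch of the pattern (shown here with Python's built-in sqlite3 module purely for illustration, not WinForms/ADO.NET; the users table and find_user helper are invented for the example), the user-supplied value is passed as a bound parameter rather than concatenated into the SQL string:

        # Minimal sketch of a parameterized query: the "?" placeholder is bound to the
        # user-supplied value, so quotes in the input cannot change the SQL statement.
        import sqlite3

        def find_user(conn: sqlite3.Connection, name: str):
            cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
            return cur.fetchall()

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        conn.execute("INSERT INTO users (name) VALUES (?)", ("O'Brien",))
        print(find_user(conn, "O'Brien"))   # the single quote is handled safely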

    Read the article

  • Specifying the bounce-back address for email

    - by Kirk Broadhurst
    I'm having a problem getting emails to bounce to a specific email address, different from the From address. A particular client requires that we send emails from a specific address (call it [email protected]). Our Exchange admins have created an account on the Exchange box so that we can log in and send from that address; our Exchange server is spoofing that address/domain. This works fine. Unfortunately, the emails sent from [email protected] are not bouncing back to us. They are presumably bouncing back to the contact account at clientcompany.com (which may or may not exist). I've inserted a Return-Path header set to [email protected] on the assumption that this field determines where bouncebacks are sent. Other documents indicate that this field should never be populated by the originating SMTP system, and other websites talk about a field called Errors-To, which is apparently non-standard. So, which field is the correct one, and what does it depend on? Any ideas why my Return-Path is not working? I'd really like to get Exchange to correctly bounce a message addressed to an invalid server!
    Update: Continuing to dig, my Return-Path work was only adding an extended property at the end of the header block, but Exchange appears to still be adding its own Return-Path value at the top:
        Delivered-To: [email protected]
        Received: by 1.1.1.1 with SMTP ...
        Return-Path: <[email protected]>
        Received: from ...
        ...
        Subject: Test
        Message-ID: ...
        Return-Path: [email protected]
    According to Microsoft, I cannot set the Return-Path as it is determined by the MAIL FROM, which seems consistent with what I've previously read. But now I'm stuck: how do I change this MAIL FROM value programmatically within Exchange 2007?
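
    For background, bounces go to the SMTP envelope sender (MAIL FROM), which the receiving system records as Return-Path on delivery; the From: header does not control it. A minimal sketch of that separation when submitting a message over plain SMTP (Python's smtplib, purely to illustrate the envelope/header distinction; the host name and addresses are placeholders, and how to override MAIL FROM inside Exchange 2007 itself is a separate question):

        # Minimal sketch: the bounce address is the SMTP envelope sender (MAIL FROM),
        # passed separately from the "From:" header. Host and addresses are examples.
        import smtplib
        from email.message import EmailMessage

        msg = EmailMessage()
        msg["From"] = "contact@clientcompany.example"      # what recipients see
        msg["To"] = "someone@destination.example"
        msg["Subject"] = "Test"
        msg.set_content("Hello")

        with smtplib.SMTP("mail.example.com") as smtp:
            # from_addr becomes MAIL FROM, and therefore the Return-Path / bounce address.
            smtp.send_message(msg,
                              from_addr="bounces@ourcompany.example",
                              to_addrs=["someone@destination.example"])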

    Read the article

  • Grep failing with Emacs (Windows) and GnuWin32 grep

    - by Andy
    Hi, I've downloaded and installed the GnuWin32 tools, and added the grep executables to the Emacs bin directory. I've also, for what it's worth, added the GnuWin32 bin folder to my Path variable. The problem is, when I try to run the suggested grep commands, I always get:
        Grep exited abnormally with code 53 at Wed Feb 24 17:16:12
    For the life of me, I can't find any reference to error code 53 anywhere! :( I've tried the exact examples on a number of websites. For example, when I enter M-x grep <RET>, it comes up with "Run grep (like this): grep -n", which is fine, but I have no idea what parameters it expects. I've tried some from tutorials, but I get error code 53 again! One of the things I've tried is straight from the Emacs wiki (http://www.emacswiki.org/emacs/GrepMode#toc2) (maybe not for the Windows version though?), which says to try this command: M-x grep -n -e setq ~/.emacs. Which I've tried, and I get:
        -*- mode: grep; default-directory: "c:/[My Directory]/" -*-
        Grep started at Wed Feb 24 17:30:47
        grep -n -e setq ~/.emacs NUL
        Grep exited abnormally with code 53 at Wed Feb 24 17:30:47
    So frustrating, as this is meant to be a powerful feature of Emacs and I'm really trying to learn it as I've heard good things about it! Any help would be appreciated! :) Andy
    UPDATE: From the suggestion below, I've tried it via the command line and it seems to work fine, so perhaps there's some config I'm missing.
    UPDATE: I've found the command M-x occur, which seems to do much the same as I would imagine grep does. Are there many extra benefits to using grep over occur if I can't get this working?

    Read the article

  • Creating a Sharepoint Development Environment from an Existing Production Environment

    - by Starky
    I have very little experience using SharePoint but a good amount using Visual Studio 2008, SQL Server 2005, Windows Server 2003 and IIS6. I need to create a development environment for a SharePoint 2007 system that will be used internally. The system is already deployed over two servers: one of the servers simply holds the database and everything else is on the other server. We are also using WSS 3.0. I have created a virtual machine with all the required software, including a clean installation of SharePoint Server 2007, and I wish to use this single virtual machine as the development environment. Right now there are no custom assemblies being used on the production server as far as I am aware. There are 3 websites: one over port 80 for user access, one over a custom port for central administration, and one over another custom port. I'm not sure what the last one is for, but my blank instance of SharePoint on my virtual machine also has something similar. I attempted to use the STSADM tool to back up and restore these 3 sites from my production environment to my development environment, and while the operations completed successfully, the central administration site in my development environment acted strangely and I could not access port 80: I did not seem to have the correct credentials for it. I suspected that it would not be so simple, so could I please have advice on how to create my development environment so that I can use it to deploy updates to the production one.

    Read the article

  • Is this an acceptable UI design decision?

    - by DVK
    OK, while I'm on record as stating that the Stack Exchange UI is pretty much one of the best websites and overall GUIs that I have ever seen as far as usability goes, there's one particular aspect of the trilogy that bugs me. For an example, head on to http://meta.stackoverflow.com . Look at the banner on top (the one that says "reminder -- it's April Fool's Day depending on your time zone!"). Personally, I feel that this is a "make the user do the figuring-out work" anti-pattern (whatever it's officially called): namely, instead of making your app smart enough to only present a certain mode of operation under the conditions when that mode is appropriate, you simply turn the mode full on and show the user an explanation of why the mode is on when it should not be (in this particular example, the mode is of course displaying the unicorn gravatars starting at 00:00 in the first time zone, despite the fact that some users are still living in March 31st). The Great Recalc was also handled the same way: instead of proactively telling the user "your rep was changed from X to Y", the same nearly invisible banner was displayed on meta. So, the questions are: Is there such an official anti-pattern, and if so, what the heck do I call it? Do you have any other well-known examples of such a design anti-pattern? How would you fix either the SO example I gave or your own example? Is there a pattern of fixing, or must it be a case-by-case solution?

    Read the article

  • ASP.NET, JavaScript, AjaxControlToolkit - get results with Selenium?

    - by Seth
    I'm a newbie to web stuff. However, I wish to scrape some data from multiple websites. I'm currently using the following technologies: Selenium, Python, and BeautifulSoup. I believe the site I am trying to scrape is using a combination of ASP.NET, JavaScript and the AjaxControlToolkit. I believe the key results I am looking for are produced by the following script:
        <script type="text/javascript">
        //<![CDATA[
        Sys.Application.initialize();
        Sys.Application.add_init(function() {
            $create(AjaxControlToolkit.AutoCompleteBehavior, {"completionInterval":50,"completionListCssClass":"autocomplete_completionListElement","completionListItemCssClass":"autocomplete_listItem","completionSetCount":20,"delimiterCharacters":"","highlightedItemCssClass":"autocomplete_highlightedListItem","id":"ctl00_ContentPlaceHolder1_AutoCompleteExtender1","minimumPrefixLength":4,"serviceMethod":"GetSchoolNames","servicePath":"AutoComplete.asmx"}, {"itemSelected":ItemSelected}, null, $get("ctl00_ContentPlaceHolder1_SchoolNameTextBox"));
        });
        Sys.Application.add_init(function() {
            $create(AjaxControlToolkit.AutoCompleteBehavior, {"completionInterval":50,"completionListCssClass":"autocomplete_completionListElement","completionListItemCssClass":"autocomplete_listItem","delimiterCharacters":"","highlightedItemCssClass":"autocomplete_highlightedListItem","id":"ctl00_ContentPlaceHolder1_AutoCompleteExtender2","minimumPrefixLength":2,"serviceMethod":"GetSuburbNames","servicePath":"AutoComplete.asmx"}, null, null, $get("ctl00_ContentPlaceHolder1_SuburbTownTextBox"));
        });
        //]]>
        </script>
    Is there an easy way to get the results of the above script processed using Selenium so that I may pass them to BeautifulSoup?
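
    Since the autocomplete results only exist after the JavaScript runs, one approach is to let Selenium drive a real browser (type a prefix into the textbox, wait for the AJAX call to populate the completion list), then hand the rendered HTML to BeautifulSoup. A minimal sketch, assuming a recent Selenium/WebDriver API and the element IDs from the script above; the URL, the typed prefix, the wait time and the guess that completion items are <li> elements are all assumptions:

        # Minimal sketch: let the AjaxControlToolkit autocomplete fire, then parse the
        # rendered DOM with BeautifulSoup. URL, timing and selectors are assumptions.
        from bs4 import BeautifulSoup
        from selenium import webdriver
        from selenium.webdriver.common.by import By
        from selenium.webdriver.support.ui import WebDriverWait

        driver = webdriver.Firefox()
        driver.get("http://example.com/schools.aspx")          # placeholder URL

        box = driver.find_element(By.ID, "ctl00_ContentPlaceHolder1_SchoolNameTextBox")
        box.send_keys("Spri")                                   # 4+ chars triggers GetSchoolNames

        # Wait until the completion list items appear in the DOM.
        WebDriverWait(driver, 10).until(
            lambda d: d.find_elements(By.CLASS_NAME, "autocomplete_listItem"))

        soup = BeautifulSoup(driver.page_source, "html.parser")
        names = [li.get_text(strip=True)
                 for li in soup.select("li.autocomplete_listItem")]
        print(names)
        driver.quit()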

    Read the article

  • Is Rails Metal (& Rack) a good way to implement a high traffic web service api?

    - by Greg
    I am working on a very typical web application. The main component of the user experience is a widget that a site owner would install on their front page. Every time their front page loads, the widget talks to our server and displays some of the data that returns. So there are two components to this web application: (1) the front-end UI that the site owner uses to configure their widget, and (2) the back-end component that responds to the widget's web API call. Previously we had all of this running in PHP. Now we are experimenting with Rails, which is fantastic for #1 (the front-end UI). The question is how to do #2, serving the widget information, efficiently. Obviously this is a much higher load than the front end, since it is called every time the front page loads on one of our clients' websites. I can see two obvious approaches:
    A. Parallel stack: set up a parallel stack that uses something other than Rails (e.g. our old PHP-based approach) but accesses the same database as the front end.
    B. Rails Metal: use Rails Metal/Rack to bypass the Rails routing mechanism, but keep the API call responder within the Rails app.
    My main question: is Rails Metal a reasonable approach for something like this? But also: will the overhead of loading the Rails environment still be too heavy? Is there a way to get even closer to the metal with Rails, bypassing most of the environment? Will Rails Metal performance approach the performance of a similar task on straight PHP (just looking for a ballpark here)? And is there a 'C' option that would be much better than both A and B? That is, something short of going to the lengths of C code compiled to binary and installed as an nginx or Apache module? Thanks in advance for any insights.

    Read the article

  • Which language to use for building a web application?

    - by harshit
    Hi, I already have experience developing websites using Java technologies. Now I have a task to develop another website, and I have the liberty to select the technology to build it with. I don't want to build it using standard Java/J2EE technology, as I want to learn a new language. The specification of the website, roughly: (1) it's a real-estate-based site; (2) it will have a DB of real estate data, around a million records; (3) the website will have more than 1000 hits/day and will have various functionality like search, add, delete, generating reports, etc., so the UI should be good and fast. Technologies I have in mind: .NET (I have already worked on it, but it's licensed so I may not go for it), Groovy, Ruby on Rails, Play, GWT, etc. I am a college student and the website is again for a student (a non-techie guy), so I have 5-6 months to bring the website up. I have read about these technologies, but they all have advantages and disadvantages, and I would like to hear from people who have used them and can tell me what they felt about the languages and the problems they hit while developing with them. Please feel free to drop any opinion you have. Thanks

    Read the article

  • Complete list of tools and technologies that make up a solid ASP.NET MVC 2 development environment for beginners

    - by Dr Dork
    This question is related to another wiki I found on SO, but I'd like to develop a more comprehensive example of an automated ASP.NET MVC 2 development environment that can be used to develop and deploy a wide range of small-scale websites by beginners. As far as characteristics of the dev environment go, I'd like to focus on beginner-friendly over powerful, since the other wiki focuses more on advanced, powerful setups. This information is targeted at beginners (who already know C# and understand web dev concepts) who have selected:
    - ASP.NET MVC 2 as their dev framework
    - Visual Studio 2010 Pro (or 2008 Pro SP1) as their IDE
    - Windows 7 as their OS
    and are looking for a quick and easy-to-set-up environment that covers managing, building, testing, tracking, and deploying their website with as much automation as possible; a system that can be used for becoming familiar with the whole process, as well as a launching point for exploring other, more custom and powerful systems. Since we've already selected the compiler, framework, and OS, I'd like to develop ideas for:
    - Code editor (unless you feel VS will suffice for all areas of code)
    - Database and related tools
    - Unit testing (VS?)
    - Continuous integration build system (VS?)
    - Project planning
    - Issue tracking
    - Deployment (VS?)
    - Source management (VS?)
    - ASP, C#, VS, and related blogs that beginners can follow
    - Any other categories I'm probably missing
    Since we're already using Visual Studio, I'd like to focus on the out-of-the-box solutions and features built into Visual Studio, unless you feel there are better solutions that work well with VS and are easier to use than the features built directly into VS. Thanks so much in advance for your wisdom!

    Read the article

  • Good examples of MapServer / OpenLayers

    - by MarkJ
    I want to convince some clients to use MapServer and OpenLayers. Can anyone please suggest attractive websites to show off the possibilities? The clients will be impressed by:
    - A density map (otherwise known as a heat map, colour-shaded grid coverage, contour plot...)
    - The ability for the user to download the underlying data for the density map, restricted to the area being viewed, in some format such as netCDF
    - Standard OpenLayers stuff: zooming, panning, scale bar, overview map...
    - Different base layers; could be WMS, Google, Bing...
    - Searching for a placename, with the map panned to display the place
    MapServer.org seems to be down right now :( but from memory their examples didn't have the "wow" factor. The OpenLayers examples demonstrate only one or two features per example; I want something to wow the clients by showing all the capabilities in one example. PS: If you have good examples that use some other open source tools, post them by all means, but just JavaScript please: the customer says no rich client.

    Read the article

  • Troubleshoot Page Loading Issue (ActiveX control)

    - by Spence
    Apologies for the cryptic title; I'm hoping most people will read this BEFORE trying to move it to Server Fault. I might try and discourage the "too hard basket, move it to Server Fault anyway" crowd as well :). I have a page which opens cleanly when it's opened in its own browser window/tab. The exact same URL, when loaded as part of a frame in a page in the same (local intranet) security zone, shows the error: "Your security settings do not allow websites to use ActiveX controls installed on your computer. This page may not display correctly. Click here for options." How do I troubleshoot IE to work out which ActiveX control won't load in the frame but does load if the page is opened directly? Please note that I have dropped the security settings fully AND I've also moved EVERY setting in IE to enable or allow and restarted the browser, but I STILL get the same issue. The two sites are on different machines in the same domain. I can't access the website source code, but I can ask for it to be fixed, so I'd really appreciate some help on this.

    Read the article

  • Web crawler update strategy

    - by superb
    I want to crawl useful resources (like background pictures...) from certain websites. It is not a hard job, especially with the help of some wonderful projects like Scrapy. The problem is that I don't just want to crawl a site ONE TIME; I also want to keep my crawl running long-term and keep crawling updated resources. So I want to know: is there any good strategy for a web crawler to get updated pages? Here's a coarse algorithm I've thought of. I divide the crawl process into rounds; in each round, the URL repository gives the crawler a certain number (say, 10000) of URLs to crawl, and then the next round starts. The detailed steps are:
    1. The crawler adds the start URLs to the URL repository.
    2. The crawler asks the URL repository for at most N URLs to crawl.
    3. The crawler fetches the URLs and updates certain information in the URL repository, like the page content, the fetch time and whether the content has changed.
    4. Go back to step 2.
    To specify that further, I still need to solve the following question: how do I decide the "freshness" of a web page, i.e. the probability that the page has been updated? Since that is an open question, hopefully it will bring some fruitful discussion here.
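
    One common heuristic for step 3 is adaptive revisiting: record whether each page changed since the last fetch, and schedule frequently changing pages sooner while backing off on static ones. A minimal sketch of that idea (the intervals, bounds and the shape of the URL-repository record are assumptions, not taken from any particular crawler):

        # Minimal sketch of adaptive recrawl scheduling: pages that changed recently are
        # revisited sooner; pages that stay the same back off exponentially.
        import hashlib
        import time

        MIN_INTERVAL = 60 * 60          # 1 hour   (assumed lower bound)
        MAX_INTERVAL = 30 * 24 * 3600   # 30 days  (assumed upper bound)

        class UrlRecord:
            def __init__(self, url):
                self.url = url
                self.fingerprint = None      # hash of the last fetched content
                self.interval = 24 * 3600    # start by revisiting daily
                self.next_fetch = 0.0        # fetch as soon as possible

            def update(self, content: bytes) -> bool:
                """Record a fetch result and reschedule the next visit."""
                digest = hashlib.sha1(content).hexdigest()
                changed = digest != self.fingerprint
                self.fingerprint = digest
                if changed:
                    self.interval = max(MIN_INTERVAL, self.interval / 2)  # visit more often
                else:
                    self.interval = min(MAX_INTERVAL, self.interval * 2)  # back off
                self.next_fetch = time.time() + self.interval
                return changed

        def pick_batch(records, n=10000):
            """Step 2: hand the crawler the n most overdue URLs."""
            due = [r for r in records if r.next_fetch <= time.time()]
            return sorted(due, key=lambda r: r.next_fetch)[:n]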

    Read the article

  • Focussing on Style Sheets and Cross Browser Compatibility.

    - by Sam
    Hello everyone. Let me begin this topic by explaining my background experience with web design. I have always been more of a back-end programmer, with PHP and SQL and things, but I do have a shallow background with HTML and CSS. The problem is, I don't know it all. What I do know is that when it comes to designing (not back-end dirty work), I understand basic CSS properties and I also understand HTML, and I can usually throw together a sloppy web page with the two and a couple bazillion DIV tags. Anyway, the problem I have always encountered is that when I design a website in a browser such as IE7 (and it looks perfect on IE7), and then look at it in IE8, IE6 or Mozilla (etc.), it gets all spacey and ugly and looks totally different than the way it should look on IE7.
    Question one: Basically, what route should I take to learn how to properly build the website? "Build" as in put it together with CSS standards and HTML standards that will make my site look the same in every browser. (Not only learning standards, but where can I learn to properly write my code?) Where is a strong free resource I can use to learn how to do these things?
    Question two: How do I properly code my website? Do I use all external style sheets to keep dynamic page design simple, or do I hard-code some things into the DIV tags on each page? What is proper?
    Oh, and if anyone has any tutorials on how to properly design a complete layout, feel free to throw them in a response somewhere. Thank you for taking the time to read my questions, and hopefully you will understand what I am trying to get across to everyone. I need to get on the right route of the design side of web programming so that I will know how to create successful websites in the future. Thank you, Sam Pardee

    Read the article

  • Scrolling a Canvas smoothly in Android

    - by prepbgg
    I'm new to Android. I am drawing bitmaps, lines and shapes onto a Canvas inside the onDraw(Canvas canvas) method of my view. I am looking for help on how to implement smooth scrolling in response to a drag by the user. I have searched but not found any tutorials to help me with this. The reference for Canvas seems to say that if a Canvas is constructed from a Bitmap (called bmpBuffer, say) then anything drawn on the Canvas is also drawn on bmpBuffer. Would it be possible to use bmpBuffer to implement a scroll, perhaps by copying it back to the Canvas shifted by a few pixels at a time? But if I use Canvas.drawBitmap to draw bmpBuffer back to the Canvas shifted by a few pixels, won't bmpBuffer be corrupted? Perhaps, therefore, I should copy bmpBuffer to bmpBuffer2, then draw bmpBuffer2 back to the Canvas. A more straightforward approach would be to draw the lines, shapes, etc. straight into a buffer Bitmap and then draw that buffer (with a shift) onto the Canvas, but as far as I can see the various methods drawLine(), drawShape() and so on are not available for drawing to a Bitmap, only to a Canvas. Could I have two Canvases, one of which would be constructed from the buffer bitmap and used simply for plotting the lines, shapes, etc., with the buffer bitmap then drawn onto the other Canvas for display in the View? I should welcome any advice! Answers to similar questions here (and on other websites) refer to "blitting". I understand the concept but can't find anything about "blit" or "bitblt" in the Android documentation. Are Canvas.drawBitmap and Bitmap.copy Android's equivalents?

    Read the article

  • Yii: Multi-language website - best practices.

    - by michal
    Hi, I find Yii a great framework, and the example website created with the yiic shell is a good starting point... however, it doesn't cover the topic of multi-language websites, unfortunately. The docs cover translating short messages, but not keeping multilingual content. I'm about to start working on a website which needs to be in at least two languages, and I'm wondering what the best way is to keep the content for that. The problem is that the content is mixed extensively with common elements (like embedded video files), and I need to avoid duplicating those common parts. So far I have kept an array of arrays containing the texts (usually no more than 1-2 short paragraphs); the view file would then just render the text from the array. Now I'd like to avoid keeping it in arrays (which requires some attention when escaping double quotation marks " " and is inconvenient in general). So, what is the best way to keep those short paragraphs? Should I keep them in the DB, like (id | msg_id | language | content), and then select them by msg_id and language? That still requires me to create some msg_ids and embed them into the view file... Is there any recommended paradigm for this for which Yii has some solutions? Thanks, m.

    Read the article

  • Tips on creating user interfaces and optimizing the user experience

    - by Saif Bechan
    I am currently working on a project where a lot of user interaction is going to take place. There is also a commercial side, as people can buy certain items and services. In my opinion, a good blend of user interface, speed and security is essential for these types of websites. It is fairly easy to use Ajax and JavaScript nowadays to do almost everything, as there are a lot of libraries available such as jQuery and others, but this can bring performance and incompatibility issues, which can lead to users just going to the next website. The overall look of the website is important too: where to place certain buttons, where to place certain types of articles such as FAQ and support, where and how to display error messages so that the user sees them but is not bothered by them, and an overall color scheme.
    The basic question is: how do you create an interface that triggers a user to buy/use your services?
    I know psychology also plays a huge role in how users interact with your website. The color scheme, for example, is important; when the colors on a website are irritating, you just want to click away. I have not found any articles that explain these concepts. Does anyone have any tips and/or resources where I can get some articles that guide you in making the correct choices for your website?

    Read the article

  • Open source Java CMS for Google App Engine?

    - by markvgti
    I am looking for an open source Java CMS (web CMS, actually) to run on Google App Engine. I have looked at related older questions on this topic (What CMS runs on Google AppEngine?, CMS over Google App Engine, with SEO, etc.) but the problem is that they all largely list Python-based CMSes. Plus these questions are pretty old, and since GAE is a fast-moving target, I thought it might be worthwhile to ask again. I want a CMS for creating some websites (for myself and for others), but would rather not start writing one from scratch. A "good" (very subjective, I know) open source WCMS allows me to start using a product, while still being able to add to/extend the product/project. On the one hand I am looking for a somewhat mature product/project; on the other hand it's easier to start contributing to the development cycle of a young product/project (conflicting, I know :-). Here are some features that would be preferable:
    - [X]HTML/XML/CSS based templating
    - Ability to create multiple blogs
    - Galleries
    - Ability to create a "Downloads" section (is this pretty much standard?)
    - Separate management for digital assets (images, PDFs, binary files etc.)
    - Roles like "Administrator", "Editor", "Contributor" etc. (or their equivalents)
    - Ability to move/reorganize pages
    - Export to PDF
    - Reformat content for printing
    Is the CMS you are about to suggest especially well-suited to publishing an online book? My idea is that while the book may be offered as a downloadable eBook, the latest, most current version will be the one available on the website.

    Read the article

  • Excluding a script from the general UrlRewrite rules

    - by Steven
    Hi, I have the following rewrite rules for a website:
        RewriteEngine On
        # Stop reading config files
        RewriteCond %{REQUEST_FILENAME} .*/web.config$ [NC,OR]
        RewriteCond %{REQUEST_FILENAME} .*/\.htaccess$ [NC]
        RewriteRule ^(.+)$ - [F]
        # Rewrite to url
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !^(/bilder_losning/|/bilder/|/gfx/|/js/|/css/|/doc/).*
        RewriteRule ^(.+)$ index.cfm?smartLinkKey=%{REQUEST_URI} [L]
    Now I have to exclude a script, including its eventual query strings, from the above rules, so that I can access and execute it in the normal way; at the moment the whole URL is being ignored and forwarded to the index page. I need to have access to the script shoplink.cfm in the root, which takes the variables tduid and url (shoplink.cfm?tduid=1&url=). I have tried to resolve it using this:
        # maybe?: RewriteRule !(^/shoplink.cfm [QSA]
    but to be honest, I don't have much of a clue about URL rewriting and have no idea what I am supposed to write. I just know that the above will generate a nice 500 error. I have been looking around a lot on Stack Overflow and other websites on the same subject, but all I see is people trying to exclude directories, not files. In the worst case I could put the script in a separate directory and exclude the directory from the rewrite rules, but I'd rather not, since the script should really remain in the root. I also just tried:
        RewriteRule ^/shoplink.cfm$ $0 [L]
    but that didn't do anything either. Anyone who can help me out on this subject? Thanks in advance. Steven Esser, ColdFusion programmer
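
    One way to approach this, sketched below and untested against this exact configuration, is to short-circuit rewriting for that single file before the catch-all rule runs: a rule whose substitution is "-" (meaning "leave the URL as it is") combined with [L] stops processing, and the query string passes through untouched because nothing rewrites it. The assumed placement is just above the "Rewrite to url" block:

        # Minimal sketch (assumed placement: before the final catch-all rule).
        # Requests for /shoplink.cfm stop here and are served as-is, query string intact.
        RewriteRule ^/?shoplink\.cfm$ - [L]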

    Read the article

  • Server port 16080 problem: web server adds the port number 16080 to the URL

    - by Juri
    Hello everybody. On my WordPress website one little thing doesn't work: sometimes the web server adds the port number 16080 to the URL, which leads to an error (network timeout).
    Wrong: http://www.example.com:16080/about-us/weekly-program/?month=may&yr=2010
    Correct: http://www.example.com/about-us/weekly-program/?month=may&yr=2010
    Does anyone have a "server port 16080 problem" fix? Is it possible that I need to add a ServerName directive to the config file to tell it the domain name of the server? Cheers, Juri
    Update: Here is the site configuration. Don't ask me to change the 16080 to 80, because that screws up everyone else's websites... Please let me know what you think of the configuration:
        ## Default Virtual Host Configuration
        <VirtualHost *:16080>
            ServerName example.com
            ServerAdmin [email protected]
            DocumentRoot "/Library/WebServer/Documents/WMsites/example.com/wordpress"
            DirectoryIndex "index.html" "index.php"
            CustomLog "/var/log/httpd/access_log" "%{PC-Remote-Addr}i %l %u %t \"%r\" %>s %b"
            ErrorLog "/var/log/httpd/error_log"
            ErrorDocument 404 /error.html
            <IfModule mod_ssl.c>
                SSLEngine Off
                SSLLog "/var/log/httpd/ssl_engine_log"
                SSLCertificateFile "/etc/certificates/Default.crt"
                SSLCertificateKeyFile "/etc/certificates/Default.key"
                SSLCipherSuite "ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:+LOW:!SSLv2:+EXP:+eNULL"
            </IfModule>
            <IfModule mod_dav.c>
                DAVLockDB "/var/run/davlocks/.davlock100"
                DAVMinTimeout 600
            </IfModule>
            <Directory "/Library/WebServer/Documents/WMsites/example.com/wordpress">
                Options All -Indexes -ExecCGI -Includes +MultiViews
                <IfModule mod_dav.c>
                    DAV Off
                </IfModule>
                AllowOverride All
            </Directory>
            <IfModule mod_rewrite.c>
                RewriteEngine On
                RewriteCond %{REQUEST_METHOD} ^TRACE
                RewriteRule .* - [F]
            </IfModule>
            <IfModule mod_alias.c>
            </IfModule>
            LogLevel warn
            ServerAlias www.example.com
            ServerAlias example.com
        </VirtualHost>

    Read the article

  • How to best handle exceptions to repeating calendar events

    - by blcArmadillo
    I'm working on a project that will require me to implement a calendar. I'm trying to come up with a system that is very flexible: it can handle repeating events, exceptions to repeats, etc. I've looked at the schemas for applications like iCal, Lotus Notes, and Mozilla to get an idea of how to go about implementing such a system. Currently I'm having trouble deciding the best way to handle exceptions to repeating events. I've used databases quite a bit but don't have a ton of experience with really optimizing everything, so I'm not sure which of the two methods I'm considering would be optimal in terms of overall performance and the ability to query/search:
    1. Breaking the repeating event: changing the ending date on the current row for the repeating event, inserting a new row with the exception, and adding another row continuing the old sequence.
    2. Simply adding an exception: adding a new row with some field that marks it as an override.
    So here is why I can't decide. Method one will result in a lot more rows, since each edit requires two extra rows as opposed to only one row with the second method. On the other hand, I think the query to find an event would be much simpler, and thus possibly faster(?), using the first method. The second method seems like it will require more calculating on the application server, since once you get the data you'll have to remove the intersection of the two rows. I know databases are often the bottleneck for websites, and while I'm sure a lot of you are thinking either is fine because your project will probably never get large enough for the difference in efficiency to really matter, I'd still like to implement the best solution. So which method would you pick, or would you do something completely different? Also, as a side note, I'll be using MySQL and PHP. If there is another technology that you think would be better suited for this, especially in the database area, please mention it. Thanks for the advice.
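
    For what it's worth, the second method is close to how iCalendar models recurrence: the series row carries a recurrence rule, and exceptions are stored as excluded dates plus optional override rows that are applied when occurrences are expanded. A minimal sketch of that expansion step, using the third-party python-dateutil library purely to illustrate the idea (the event times are made up):

        # Minimal sketch: expand a weekly recurring event, drop one cancelled occurrence,
        # and add a rescheduled override. Requires the python-dateutil package.
        from datetime import datetime
        from dateutil.rrule import rrule, rruleset, WEEKLY

        occurrences = rruleset()
        # The base series: every Monday at 09:00, ten times (example data).
        occurrences.rrule(rrule(WEEKLY, dtstart=datetime(2010, 5, 3, 9, 0), count=10))
        # Exception: the May 17 occurrence is cancelled...
        occurrences.exdate(datetime(2010, 5, 17, 9, 0))
        # ...and replaced by an override on May 18 at 14:00.
        occurrences.rdate(datetime(2010, 5, 18, 14, 0))

        for when in occurrences:
            print(when)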

    Read the article

  • Web scraping etiquette

    - by Ash
    I'm considering writing a simple web scraping application to extract information from a website that does not seem to specifically prohibit this. I've checked for other alternatives (e.g. RSS, a web service) to get this information, but there are none available at this stage. Despite this, I've also developed/maintained a few websites myself, so I realize that if web scraping is done naively/greedily it can slow things down for other users and generally become a nuisance. So, what etiquette is involved in terms of:
    - Number of requests per second/minute/hour
    - HTTP User-Agent content
    - HTTP Referer content
    - HTTP cache settings
    - Buffer size for larger files/resources
    - Legalities and licensing issues
    - Good tools or design approaches to use
    - robots.txt: is this relevant for web scraping or just for crawlers/spiders?
    - Compression such as gzip in requests
    Update: Found this relevant question on Meta: "Etiquette of Screen Scraping StackOverflow". Jeff Atwood's answer has some helpful recommendations. Other related StackOverflow questions: "Options for html scraping", "Legalities of screen scraping".
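
    In practice, two of the points above (robots.txt and request rate) can be handled mechanically: check the site's robots.txt before each fetch, identify yourself with a descriptive User-Agent, and keep a fixed delay between requests. A minimal sketch using Python's standard library; the host name, the contact URL in the User-Agent and the 5-second delay are placeholder assumptions:

        # Minimal sketch of a polite fetch: honor robots.txt, identify the scraper,
        # and throttle requests. Host, contact URL and delay are example values.
        import time
        import urllib.request
        import urllib.robotparser

        USER_AGENT = "example-scraper/0.1 (+http://example.com/contact)"   # hypothetical

        robots = urllib.robotparser.RobotFileParser("http://example.com/robots.txt")
        robots.read()

        def polite_fetch(url, delay=5.0):
            if not robots.can_fetch(USER_AGENT, url):
                return None                                   # disallowed by robots.txt
            request = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
            body = urllib.request.urlopen(request).read()
            time.sleep(delay)                                 # stay far below one request/second
            return body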

    Read the article

  • Problem with Tapestry palette's arrow icons in IE8

    - by JellyHead
    I'm using Tapestry to create pages for a web app, and have been using the palette component to add/delete items to/from a group. The page looks great in Firefox (Tapestry seems biased towards Firefox), but my customers will all be using Internet Explorer (any version from 6, 7 and 8), and in IE8 the disabled arrow buttons look awful. In Firefox they are faded, using an opacity setting of 25%, but this doesn't work in IE8; instead you get a faded image with an ugly black border around it. In tapestry-core's stylesheet (default.css), you have the following for a disabled arrow button:
        DIV.t-palette-controls BUTTON[disabled] IMG {
            filter: alpha(opacity = 25);
            -moz-opacity: .25;
        }
    These are clearly out of date, as -moz-opacity is no longer supported by Firefox (use opacity: .25 instead). The problem is with filter: alpha(opacity = 25). If I remove this, the arrows look fine in IE8, but they are not faded. I got the magic instruction
        -ms-filter: "progid:DXImageTransform.Microsoft.Alpha(opacity=25)";
    from various websites, but putting this in does not work either: the arrow icons are ugly again. The icon itself (distributed with Tapestry) just seems to be a regular PNG, but I'm not an expert on image formats, so maybe there's a problem there? Anyone else had this problem?

    Read the article

  • Python for a hobbyist programmer (a few questions)

    - by Matt
    I'm a hobbyist programmer (only in TI-Basic before now), and after much, much, much debating with myself, I've decided to learn Python. I don't have a ton of free time to teach myself a hundred languages, and all the programming I do will be for personal use or for distributing to people who need it, so I decided that I needed one good, strong language to be good at. My questions:
    1. Is Python powerful enough to handle most things that a typical programmer might do in his off-time? I have in mind things like complex stat generators based on user input for tabletop games, making small games, automating install processes, and building interactive websites, but probably a hundred things along those lines.
    2. Does Python handle networking tasks fairly well?
    3. Can Python source be obfuscated, or is it going to be open source by nature? The reason I ask is that if I make something cool and distribute it, I don't want some idiot script kiddie to edit his own name in and say he wrote it.
    4. How popular is Python compared to other languages? Ideally, my language would be good and useful, with help found online without extreme difficulty, but not so common that every idiot with a computer knows it. I like the idea of knowing a slightly obscure language.
    Thanks a ton for any help you can provide.
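
    On the networking question, the standard library covers most everyday cases; as a minimal sketch, fetching a web page and talking over a raw TCP socket each take only a few lines (the host names below are placeholders):

        # Minimal sketch: two everyday networking tasks using only the standard library.
        import socket
        import urllib.request

        # 1. Fetch a web page over HTTP.
        html = urllib.request.urlopen("http://example.com/").read()
        print(len(html), "bytes fetched")

        # 2. Open a raw TCP connection and exchange a little data.
        with socket.create_connection(("example.com", 80), timeout=10) as conn:
            conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
            print(conn.recv(200).decode("latin-1"))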

    Read the article

  • Java and tomcat vs ASP.NET and IIS

    - by Mark Cooper
    Until recently I'd considered myself to be a pretty good web programmer (coming up on 10 years' commercial experience on a variety of e-commerce, static and enterprise applications). I'm self-taught and have always used the Microsoft product stack (ASP, ASP.NET)... My applications are always functional and relatively bug-free, but have never been lightning quick. As a frequent web user I always found this to be the norm: how fast are the websites from the big tech players (eBay, Facebook, Microsoft, IBM, Dell, Telerik etc.)? In truth, none are particularly fast. I always attributed this to "the way things are with web apps"... then I came across a product called Jira from Atlassian, and this has stopped me in my tracks. This application is fast, and I mean blindingly fast; too fast to time the switches between pages, with fully live content, lots of images and data and cross-references, etc. I run this on an intranet, with a large application DB, and it is running on a very normal server (single processor, SATA HDD, 8GB RAM). Am I missing something? Are my programming techniques that bad? I am wondering if this speed gain is down to it being written in Java and running on Tomcat. Does anyone have any benchmarks to compare JSP/ASP or Tomcat/IIS? Thanks, Mark
    NOTE: this isn't a blatant plug for Jira. I don't work for them or have any affiliation with them... but I would like to be able to write applications like them :)

    Read the article

  • Why do ASP.NET timers/UpdatePanels leak memory, and can it be fixed/worked around?

    - by KallDrexx
    I have built a suite of internal websites for our company to manage some of our processes. I have been noticing that these pages have massive memory leaks that cause the pages to use well over 150MB of memory, which is ridiculous for a web page that consists of a single form and a GridView displaying 7-10 rows of data at a time, sometimes with data that doesn't change for a whole day. This data does need to be refreshed on a semi-regular basis so that we always see the latest results and can act on them. After some testing, it appears that the memory leak is extremely easy to reproduce, and very noticeable. I created a page with the following ASP.NET markup:
        <body>
            <form id="form1" runat="server">
            <div>
                <asp:scriptmanager ID="Scriptmanager1" runat="server"></asp:scriptmanager>
                <asp:Timer ID="timer1" runat="server" Interval="1000" />
                <asp:UpdatePanel ID="UpdatePanel1" runat="server">
                    <ContentTemplate>
                    </ContentTemplate>
                </asp:UpdatePanel>
            </div>
            </form>
        </body>
    There is absolutely no code-behind for this. This is the entirety of the page. Running this site in Chrome shows the memory usage shoot up to 25 megs in the span of 20-30 seconds. Leaving it running for a few minutes makes the memory go up to 70 megs and beyond. Am I using timers and update panels wrong, or is this a pure ASP.NET issue with no workaround?

    Read the article
