Search Results


  • Detect frameworks and/or CMS utilized on websites in Firefox

    - by jkneip
    I'm redesigning the website for my academic library and am examining other sites to identify the technologies they use: web frameworks, JavaScript frameworks, server-side technology, and content management systems. I've had some real success in Firefox using plugins like Wappalyzer, Firebug, and the DOM Inspector, but some sites just don't reveal any of this information to those tools, especially, it seems, when an enterprise-level CMS is being used. Does anyone know of other tools that detect this kind of data? Also, Firebug and the DOM Inspector display a lot of information, and I wondered whether the presence of server-side technologies, CMSs, and so on can be inferred from particular elements of a web page. If this question is more relevant to another Stack Exchange site, please let me know and I'll post it there instead. Much thanks, Jason
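
    One low-tech complement to those plugins is to look for fingerprints yourself: response headers, the generator meta tag, and tell-tale asset paths. Below is a minimal sketch (my own illustration, not one of the tools mentioned above) that can be pasted into a modern browser's JavaScript console; the header names and paths it checks are just common examples, and sites can strip or spoof all of them, so treat any match as a hint rather than proof.

    ```javascript
    // Minimal sketch: run in the browser's JS console on the page being examined.
    (async function () {
      var hints = [];

      // Response headers often leak the server-side platform.
      var res = await fetch(window.location.href, { method: 'HEAD' });
      ['server', 'x-powered-by', 'x-generator', 'x-drupal-cache'].forEach(function (name) {
        var value = res.headers.get(name);
        if (value) hints.push(name + ': ' + value);
      });

      // Many CMSs emit a <meta name="generator"> tag.
      var generator = document.querySelector('meta[name="generator"]');
      if (generator) hints.push('meta generator: ' + generator.content);

      // Asset paths are another giveaway (e.g. /wp-content/ for WordPress).
      Array.prototype.forEach.call(document.scripts, function (s) {
        if (s.src && s.src.indexOf('/wp-content/') !== -1) hints.push('script path suggests WordPress');
      });

      console.log(hints.length ? hints : 'no obvious fingerprints found');
    })();
    ```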

    Read the article

  • Web application project management methodologies

    - by dutchiexl
    I'm looking to streamline my company's web development process, including analysis. I'm experienced with XP and Scrum, but we build web applications with a development cycle of 3-4 weeks and a lifetime of 1-4 months. Only once a project is sold do the project managers get involved (people who do the analysis but know little about it, producing a small flowchart and some screenshots as the analysis). What happens as a result is: a lot of change requests, minimal development time, and minimal analysis time. Now, the main question: can you recommend some methodologies and books to read that cover the entire project management process? Thanks in advance. Edit: I was looking at a combination of Scrum for the management side, with flowcharts, plus RAD/LD for development, and trying to distil something from that.

    Read the article

  • Photoshop Elements 9 vs. PaintShop Photo Pro X3 for web design

    - by Brian
    I need a good image creation program for web design. I have downloaded both Elements 9 and PaintShop X3, and so far I have found them both to be great programs. X3 seems like it has a lot of features; Elements seems quite easy and stable to use. I think I'm going to go with Elements, but I wanted to get other opinions. Which program do you like better overall? What do you think they lack for image creation/editing pertaining to web design, and what features do they have that are great for it? Thanks!

    Read the article

  • Google Analytics - Unable to get GA Tracking

    - by Pure.Krome
    We've been using GA for a few years with no problems. About 2-3 weeks ago we tried to clean up some of our tracking, and on one of our profiles it's not working any more (since Oct 10). First some context, then some GA debugging output. 1. Context. We have the following setup: different root domains AND different sub-domains on one of the root domains: www.website.com, www.website.com.au, www.anotherWebsite.com, foo.website.com, baa.website.com. So what we're doing is the following: each root domain and each sub-domain gets its own tracking code. This way we can allow separate people (from outside our company) to access only their own data; e.g. a manager for foo.website.com can only see data related to that domain and can't see data for the other domains. We also have one last account which is the SUM of all the domains; this is for us, so we can see total numbers. To do this, we have two trackers that fire on the page. The individual accounts all work fine - they seem to be tracking data OK. The 'global' account is not working, and it gives us the "Tracking Not Installed" error. This has been going on since Oct 10, so the wait-24/48/72-hours thing is waaaaay over. 2. GA debug output. Installing the GA Debug Chrome extension gives the following output (I've tried to hide anything that could be considered secret). UA-XXXXX34-1 == global account (which isn't working any more); UA-XXXXX34-11 == specific account for www.website.com.

    ```
    _gaq.push processing "_setAccount" for args: "[UA-XXXXX34-1]": ga_debug.js:18
    _gaq.push processing "_setDomainName" for args: "[website.com]": ga_debug.js:18
    _gaq.push processing "_setAllowLinker" for args: "[true]": ga_debug.js:18
    _gaq.push processing "_trackPageview" for args: "[]": ga_debug.js:18
    Track Pageview ga_debug.js:18
    Tracking beacon sent! utmwv=--snipped--
      Account ID             : UA-XXXX234-1
      Page Title             : Some page title
      Host Name              : www.website.com
      Page                   : /
      Referring URL          : -
      Hit ID                 : 1923583969
      Visitor ID             : 785310647
      Session Count          : 51
      Session Time - First   : Thu Aug 23 2012 15:20:17 GMT+1000 (AUS Eastern Standard Time)
      Session Time - Last    : Mon Oct 29 2012 11:41:46 GMT+1100 (AUS Eastern Summer Time)
      Session Time - Current : Mon Oct 29 2012 12:19:23 GMT+1100 (AUS Eastern Summer Time)
      Campaign Time          : Thu Aug 23 2012 15:20:17 GMT+1000 (AUS Eastern Standard Time)
      Campaign Session       : 1
      Campaign Count         : 1
      Campaign Source        : (direct)
      Campaign Medium        : (none)
      Campaign Name          : (direct)
      Language               : en-gb
      Encoding               : UTF-8
      Flash Version          : 11.4 r31
      Java Enabled           : true
      Screen Resolution      : 1050x1680
      Browser Size           : 1033x861
      Color Depth            : 32-bit
      Ga.js Version          : 5.3.7d
      Cachebuster            : 1846514973
    ga_debug.js:18
    _gaq.push processing "_setAccount" for args: "[UA-XXXX234-11]": ga_debug.js:18
    _gaq.push processing "_setDomainName" for args: "[website.com]": ga_debug.js:18
    _gaq.push processing "_setAllowLinker" for args: "[true]": ga_debug.js:18
    _gaq.push processing "_trackPageview" for args: "[]": ga_debug.js:18
    Track Pageview ga_debug.js:18
    Tracking beacon sent! utmwv=--snipped--
      Account ID             : UA-XXXX234-11
      Page Title             : SomePageTitle
      Host Name              : www.website.com
      Page                   : /
      (remaining fields identical to the first beacon, except Cachebuster : 1580443754)
    ```

    And this is the JS code we have. BTW, it is inside the <head></head>:

    ```javascript
    <script type="text/javascript">
      var _gaq = _gaq || [];
      _gaq.push(
        ['_setAccount', 'UA-XXXX234-1'],
        ['_setDomainName', 'website.com'],
        ['_setAllowLinker', true],
        ['_trackPageview'],
        ['b._setAccount', 'UA-XXXX234-11'],
        ['b._setDomainName', 'website.com'],
        ['b._setAllowLinker', true],
        ['b._trackPageview']
      );
      (function () {
        var ga = document.createElement('script');
        ga.type = 'text/javascript';
        ga.async = true;
        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(ga, s);
      })();
    </script>
    ```

    Finally, I've triple-checked that the UA codes are the correct text, and yes, the global account is -1 and the specific domain is -11. Does anyone have any suggestions to help?

    Read the article

  • ASP.NET Podcast Show #143 - Windows Azure Part I - Web Roles

    - by Wallym
    Original URL: http://aspnetpodcast.com/CS11/blogs/asp.net_podcast/archive/2010/10/25/asp-net-podcast-show-143-windows-azure-part-i-web-roles.aspx (forgot to post this here). This show is on Web Roles in Azure, Blob Storage, and the Visual Studio 2010 Azure tools. Subscribe to everything. Subscribe to WMV. Subscribe to M4V for iPhone/iPad. Subscribe to MP3. Download WMV. Download MOV. Download M4V for iPhone/iPad. Download MP3.

    Read the article

  • Finding the right bug tracker web application

    - by FullmetalBoy
    I'm looking for a bug tracking system (similar to http://www.mantisbt.org) with the following requirements. Functional requirements: upload pictures and other files without any limit on file size; if a user belongs to a specific group or business group, the GUI's logo should change to that group's logo after the user has entered a user name and password; all users share the same database but see a different GUI; the correct information about unassigned, modified, and resolved bug cases is displayed based on which group or business group the user is assigned to, and all of it is shown after the user has logged in; it must be possible to create one or more super users (administrators) as well as regular users; and each user should only retrieve the information belonging to their own group or business group. Non-functional requirement: the bug tracker must be a web application, with all information viewed through a web browser.

    Read the article

  • How do you develop web applications? [closed]

    - by ck3g
    How do you and/or your team develop your web applications? Language, framework, or platform doesn't matter; I would like to know about the structure of your environment. For example: using an IDE on your workstation with the project files on a remote host accessed via SFTP, so files are saved straight to the remote host; keeping all files local and uploading them to the remote host on save; keeping files local with a web server running on the local machine and testing against localhost; and so on. Please also describe the benefits of your approach - that will be useful for me. Thanks. Update: there has to be a question here, so here it is: which approach is best, in your opinion?

    Read the article

  • Drawing shapes dynamically on an image through web browser

    - by Tom Beech
    We have a scenario where we create floor plans of the locations we visit, and the floor plans are ultimately shown on the web. We've now reached the point where we want to display a floor plan alongside a key of various items: when an item in the key is clicked, the image should highlight all the areas of the floor plan that contain that specific item. I guess we're looking for some sort of open-standard JavaScript library for dealing with SVG (it has to work pre-IE9, so pure SVG won't cut it), and the floor plans have to be created through a .NET application and deployed on the web. I'd rather stay away from Flash if at all possible, to be honest. Below are a few conceptual images of what we're trying to achieve.
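
    One way this was commonly done for pre-IE9 browsers is with Raphaël, which draws SVG and falls back to VML on old IE. The sketch below is only an illustration of the highlight-on-click idea, not the poster's setup: the container id, image path, key markup, and `regions` coordinate map are all hypothetical (in practice the .NET application would generate the coordinates along with the plan).

    ```javascript
    // Minimal sketch using Raphaël (SVG with VML fallback for IE 6-8).
    // Assumes <div id="plan"></div> and a key like <ul id="key"><li data-item="printer">Printer</li>...</ul>.
    var paper = Raphael("plan", 800, 600);
    paper.image("floorplan.png", 0, 0, 800, 600);

    var regions = {
      "printer":   [{ x: 300, y: 200, w: 50, h: 40 }],
      "fire-exit": [{ x: 40,  y: 60,  w: 30, h: 30 }, { x: 700, y: 520, w: 30, h: 30 }]
    };

    var highlights = paper.set();

    function highlight(item) {
      highlights.remove();                 // clear the previous selection
      highlights = paper.set();
      var areas = regions[item] || [];
      for (var i = 0; i < areas.length; i++) {
        var r = areas[i];
        highlights.push(
          paper.rect(r.x, r.y, r.w, r.h)
               .attr({ fill: "#ff0", stroke: "#f80", opacity: 0.4 })
        );
      }
    }

    document.getElementById("key").onclick = function (e) {
      e = e || window.event;               // old-IE event model
      var target = e.target || e.srcElement;
      var item = target.getAttribute("data-item");
      if (item) highlight(item);
    };
    ```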

    Read the article

  • synchronization web service methodologies or papers

    - by Grady Player
    I am building a web service (PHP + JSON) to sync with my iPhone app. The main goals are: backup; providing a web view for printing, sorting, and manipulating the data; and allowing a group to sync up and down. I am aware of the logic problems with all of these items, i.e. if one person deletes something, do you propagate that change to other users, how do you handle collisions, etc. I am looking for any book, scholarly work, or even words of wisdom that addresses the common issues: when to detect data changes with hashes versus modified dates, or a combination; how to consolidate sequential IDs originating on different client nodes (this can be sidestepped in my context, but it would be interesting); dealing with collisions (is there a universally safe way to do so?); general best practices; and how to structure the actual data transaction (ask for the whole list and then detect changes, or something else?).
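
    For the last point (how to structure the transaction), one common shape is a delta sync: the client sends everything it changed since its last successful sync, keyed by temporary client IDs, and the server replies with its own changes plus an ID mapping. The sketch below is purely illustrative; every field name and value is hypothetical, and the server side (PHP in this case) would mirror the same structure.

    ```javascript
    // Illustrative request/response payloads for a delta sync (hypothetical fields).
    var request = {
      device_id: "iphone-abc123",
      last_sync: "2012-10-01T09:30:00Z",          // returned by the previous sync
      changes: [
        { tmp_id: "c1", op: "create", item: { title: "New note", modified: "2012-10-02T08:00:00Z" } },
        { id: 42,       op: "update", item: { title: "Edited",   modified: "2012-10-02T08:02:00Z" } },
        { id: 17,       op: "delete" }
      ]
    };

    // The server applies the changes (resolving collisions, e.g. last-writer-wins
    // on `modified`, or flagging conflicts for the user) and answers with:
    var response = {
      server_time: "2012-10-02T08:03:00Z",        // becomes the client's next last_sync
      id_map: { c1: 107 },                        // client temp ID -> server-assigned ID
      changes: [                                  // everything others changed since last_sync
        { id: 99, op: "update", item: { title: "Changed elsewhere", modified: "2012-10-01T12:00:00Z" } }
      ],
      conflicts: []                               // or a list the client must resolve/merge
    };

    console.log(JSON.stringify(request, null, 2));
    console.log(JSON.stringify(response, null, 2));
    ```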

    Read the article

  • Starting web development with ASP.Net [closed]

    - by nayef harb
    Possible duplicate: Fastest way to get up to speed on webapp development with ASP.NET? If you develop with ASP.NET, which other technologies do you use? How much do I need to learn in order to get an entry-level ASP.NET job? Training plan for ASP.NET and C#. Trying to learn ASP.NET. What should every programmer know about web development? I learned some ASP.NET web development a couple of months ago in college - nothing serious, just a couple of general lessons. But now I am confused about where to start: should I start with HTML and JavaScript before ASP.NET?

    Read the article

  • Web interface with FastCGI or with direct HTTP?

    - by Basile Starynkevitch
    Let's assume I want (for fun, at first) to play with a new DSL (domain-specific language) idea, and I really want its users (probably only me at first) to interact with it through a web interface. I'll probably implement it in C++ (probably using LLVM). Should I use an HTTP server library (like libonion or microhttpd) to speak HTTP directly, or should I use FastCGI? In particular, I notice that several recent web frameworks (Opa, Ocsigen, ...) don't have a FastCGI interface, only an HTTP one, so my feeling is that FastCGI is rather out of fashion. Any opinions on that? Do you know of any recently started projects using FastCGI? (And what about SCGI?)

    Read the article

  • Web development tools/approaches?

    - by Clinton
    My day job involves a bit of programming, but I've recently been attempting some web development for personal reasons. I've got Drupal up and running and have done basic things like adding new content (i.e. a heading and text) and adding modules and themes, but I'm not sure how to approach actually designing pages. When I mucked around with web pages 15 years ago, it was just a mixture of HTML, CSS and JavaScript, generally written in a text editor. Have things changed, or is that still the way I'd make a Drupal page today? If it makes a difference, the pages I want to design have only static content, but I'd like them to be easily updatable.

    Read the article

  • .aspx websites: are they built using Web Forms?

    - by Lazeera
    I visit many websites that I assume are built using ASP.NET Web Forms because of the .aspx extension. When I view the source of these websites I see at least one or two fields like <input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE"> with a value such as wvcD4NCjxwPtin2YTZh9iv2YrYqSDYp9mE2KvYp9mG2YrYqSDZh9mKINit2..... However, yesterday I visited two sites - one is the ASP.NET forums (http://forums.asp.net) and the other is POF. These sites still use the .aspx extension, but when I view their source I can't find any __VIEWSTATE hidden field or encoded value. So I would like to know: how do those sites use ASP.NET Web Forms and still keep their final HTML output clean?

    Read the article

  • Internal and external API architecture

    - by Tacomanator
    The company I work for maintains a successful SaaS product that grew "organically" over the years. We are planning to expand the line with a suite of new products that will share data with the existing product. To support this, we are looking to consolidate business logic into a single place: a web service layer. The WS layer will be used by the web applications, a tool to import data, and a tool to integrate with other client software (not an API per se). We also want to create an API that customers who are capable of using it can use to build their own integrations. We are struggling with the following question: should the internal API (i.e. the WS layer) and the external API be one and the same, with security and permission settings controlling what can be done by whom, or should they be two separate applications, where the external API just calls the internal API like any other application? So far in our debate it seems that separating them may be more secure but will add overhead. What have others done in a similar situation?
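
    For what it's worth, here is a minimal sketch of the second option (two separate applications): the external API is a thin gateway that authenticates the customer, enforces permissions, and forwards only a whitelisted subset of calls to the internal WS layer. Node and Express are just an illustrative stack, not the company's; the routes, header, and internal hostname are all hypothetical.

    ```javascript
    // Minimal sketch of an external API gateway in front of an internal WS layer.
    // Requires Node 18+ (built-in fetch) and `npm install express`.
    const express = require("express");

    const app = express();
    const INTERNAL_API = "http://internal-ws.local";   // never exposed publicly

    // Hypothetical authentication/permission check for external callers.
    function authorize(req, res, next) {
      const apiKey = req.get("X-Api-Key");
      if (!apiKey) return res.status(401).json({ error: "missing API key" });
      // ...look up the customer, attach its permissions to req, reject if needed...
      next();
    }

    // Only a curated subset of internal operations is exposed externally.
    app.get("/v1/orders/:id", authorize, async (req, res) => {
      const upstream = await fetch(`${INTERNAL_API}/orders/${req.params.id}`);
      res.status(upstream.status).json(await upstream.json());
    });

    app.listen(8080, () => console.log("external API listening on :8080"));
    ```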

    Read the article

  • Erlang web frameworks survey

    - by Zachary K
    (Inspired by a similar question about Haskell.) There are several web frameworks for Erlang, like Nitrogen, Chicago Boss, and Zotonic, and a few more. In what respects do they differ from each other? For example: features (e.g. server only, or also client-side scripting; easy support for different kinds of database), maturity (e.g. stability, documentation quality), scalability (e.g. performance, handy abstractions), and main targets. Also, what are examples of real-world sites / web apps using these frameworks? Edit: starting a bounty in the hope that it will get some conversation going.

    Read the article

  • Web technologies on GUI apps

    - by Apalala
    I developed many GUI applications for the Windows platform during my early professional career, and saw several GUI frameworks come along, have whole magazines devoted to them, and then fade away. MFC is iconic. Tasked with writing yet another GUI application, I started researching cross-platform frameworks like Qt and WxWindows. I found the same steep learning curves I knew from before, and the tooling doesn't help much in building a functional and elegant user interface, because it's clumsy and complicated. But people are building beautiful and functional UIs on the Web all the time (look at this site!). The standards, the libraries, and the tools are certainly there. My thought, and my question: why not write a GUI in which most of the UI is handled by an embedded browser? I already know that Qt widgets support a large part of CSS and JavaScript, and programmers with good knowledge of web development are relatively easy to find... so: have you done something like that before? What's your experience/advice?
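
    To make the idea concrete, here is a minimal sketch of a desktop app whose entire UI is an embedded browser. Electron is used purely as an example shell (the post itself mentions Qt, whose web view would play the same role); the file names are hypothetical and the UI itself is just ordinary HTML/CSS/JavaScript.

    ```javascript
    // main.js - minimal Electron shell; the whole GUI lives in ui/index.html.
    // Run with `npm install electron` and `npx electron .` (package.json assumed).
    const { app, BrowserWindow } = require("electron");

    function createWindow() {
      const win = new BrowserWindow({ width: 1024, height: 768 });
      win.loadFile("ui/index.html");    // plain web markup, styled with CSS,
                                        // wired up with whatever JS you like
    }

    app.whenReady().then(createWindow);
    app.on("window-all-closed", () => app.quit());
    ```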

    Read the article

  • Registration free hosting for ASP.NET web service

    - by Andrew
    I've built a simple ASP.NET web service, tested it locally, and would like to test it when hosted externally. Are there free hosting services available where I can just upload the assembly and the service description file and test it straight away, without registering an account, etc.? My service does not do anything malicious, and I am OK with running it in a restricted environment (security sandbox, limited bandwidth, calls per second, etc.). I have heard about appharbor.com, but it looks like overkill for testing a simple web service.

    Read the article

  • Web Apps vs Web Services: 302s and 401s are not always good Friends

    - by Your DisplayName here!
    It is not uncommon to have web sites that contain both web UX and services content. The UX part may use WS-Federation (or some other redirect-based mechanism). That means that whenever an authorization error occurs (a 401 status code), it is picked up by the corresponding redirect module and turned into a redirect (302) to the login page. All is good. But in services, when you emit a 401, you typically want that status code to travel back to the client agent so it can do its own error handling. These two approaches conflict. If you think (like me) that you should separate UX and services into separate apps, you don't need to read on. Just do it ;) If you need to mix both mechanisms in a single app, here's how I solved it for a project. I sub-classed the redirect module (in my case the WIF WS-Federation HTTP module) and modified the OnAuthorizationFailed method. In there I check for a special HttpContext item, and if it is present, I suppress the redirect. Otherwise everything works as normal:

    ```csharp
    class ServiceAwareWSFederationAuthenticationModule : WSFederationAuthenticationModule
    {
        protected override void OnAuthorizationFailed(AuthorizationFailedEventArgs e)
        {
            base.OnAuthorizationFailed(e);

            var isService = HttpContext.Current.Items[AdvertiseWcfInHttpPipelineBehavior.DefaultLabel];
            if (isService != null)
            {
                e.RedirectToIdentityProvider = false;
            }
        }
    }
    ```

    Now the question is how you smuggle that value into the HttpContext. If it is an MVC-based web service, that's easy, of course. In the case of WCF, one approach that worked for me was to set it in a service behavior (a dispatch message inspector, to be exact):

    ```csharp
    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        if (HttpContext.Current != null)
        {
            HttpContext.Current.Items[DefaultLabel] = true;
        }
    }
    ```

    HTH

    Read the article

  • Web application deployment and Dependencies

    - by Reith
    I have a free-software web application that uses other free-software scripts for its appearance. I'm having trouble deciding whether I should copy the source code of those scripts into my project's main repository, or list them as dependencies and ask the user to install them himself. Since some of the scripts solve browser-compatibility issues and I'm not a good web designer (I hate checking my site in IE for compatibility), using the newest versions of the scripts is preferable, which favours the second solution. But it has a problem: newer versions of the scripts aren't always backward-compatible with the versions I used during development. Maybe there is a well-known method for handling this issue that I don't know about.

    Read the article

  • AngularJS: structuring a web application with multiple ng-apps

    - by mg1075
    The blogosphere has a number of articles on the topic of AngularJS app structuring guidelines such as these (and others): http://www.johnpapa.net/angular-app-structuring-guidelines/ http://codingsmackdown.tv/blog/2013/04/19/angularjs-modules-for-great-justice/ http://danorlando.com/angularjs-architecture-understanding-modules/ http://henriquat.re/modularizing-angularjs/modularizing-angular-applications/modularizing-angular-applications.html However, one scenario I have yet to come across for guidelines and best practices is the case where you have a large web application containing multiple "mini-spa" apps, and the mini-spa apps all share a certain amount of code. I am not referring to the case of trying to have multiple ng-app declarations on the same page; rather, I mean different sections of a large site that have their own, unique ng-app declaration. As Scott Allen writes in his OdeToCode blog: One scenario I haven't found addressed very well is the scenario where multiple apps exist in the same greater web application and require some shared code on the client. Are there any recommended approaches to take, pitfalls to avoid, or good sample structures of this scenario that you can point to?
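
    One pattern that comes up for the shared-code part is to package the common client code as its own AngularJS module and have each mini-SPA's ng-app module depend on it. The sketch below (AngularJS 1.x) is only illustrative; the module, service, and endpoint names are hypothetical.

    ```javascript
    // shared.js - common code served to every section of the site.
    angular.module("shared", [])
      .factory("currentUser", function ($http) {
        return {
          load: function () { return $http.get("/api/me"); }   // hypothetical endpoint
        };
      });

    // admin.js - one mini-SPA; <html ng-app="adminApp"> only on admin pages.
    angular.module("adminApp", ["shared"])
      .controller("DashboardCtrl", function ($scope, currentUser) {
        currentUser.load().then(function (res) { $scope.user = res.data; });
      });

    // reports.js - another mini-SPA; <html ng-app="reportsApp"> on report pages.
    angular.module("reportsApp", ["shared"]);
    ```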

    Read the article

  • Loading main javascript on every page? Or breaking it up to relevant pages?

    - by Kyle
    I have a 700 KB uncompressed JS file that is loaded on every page. I previously had 12 JavaScript files on each page, but to reduce HTTP requests I combined them all into one file. This file is ~130 KB gzipped and is served with gzip. However, on the client it is still unpacked and parsed on every page. Is this a performance issue? I've profiled the JavaScript with the Firebug profiler but did not see any problems. The problem (or illusion) I'm facing is that the file bundles jQuery libraries that are sometimes not used on the current page. For example, jQuery DataTables is 200 KB compressed and is only used on 2 of my site's pages. jqPlot is another 200 KB. So I have 400 KB of excess code that isn't executed on 80% of the pages. Should I leave everything in one file, or should I take out the jQuery libraries and load only the JS relevant to the current page?
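
    If you do split things up, one middle ground is to keep a small core bundle on every page and pull the heavy, rarely used libraries in only where they are needed. A rough sketch follows; the file paths and the element check are hypothetical, and it assumes jQuery is already in the core bundle.

    ```javascript
    // Load a script on demand and run a callback once it has arrived.
    function loadScript(src, onLoad) {
      var s = document.createElement("script");
      s.src = src;
      s.onload = onLoad;
      document.getElementsByTagName("head")[0].appendChild(s);
    }

    // Only the pages that actually contain a report table pay the DataTables cost.
    if (document.getElementById("report-table")) {
      loadScript("/js/jquery.dataTables.min.js", function () {
        $("#report-table").dataTable();
      });
    }
    ```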

    Read the article

  • Are web application usability issues equal to website usability issues?

    - by Kor
    I've been reading two books about web usability issues and testing (Rocket Surgery Made Easy¹ and Prioritizing Web Usability²), and they describe strategies for typical website usability problems and how to address them. However, I want to build a web application, and I think I've lost track of what I'm trying to solve. These two books deal with plain websites (e-commerce, business sites, even intranets), but I'm not sure whether everything about website usability applies to web application usability. They certainly talk about always keeping the Back button available (and usable), focusing on short pieces of information rather than big blocks of text, and so on, but they may be off the mark on deeper problems that are easier (or simply skippable) on regular websites. Does anybody have experience in this field and can tell me whether web applications and websites share the same usability issues? Thanks in advance. Edit: Quoting Wikipedia, a website is a collection of related web pages containing images, videos or other digital assets, and a web application is an application that is accessed over a network such as the Internet or an intranet. To sum up, both show information and let you search for or produce it, but websites are "simple" in their interaction and stick to website classics (one-click actions), whereas web applications are closer to desktop applications in how they are used and interacted with (double clicks, modal windows, asynchronous calls [to keep you in the same "environment" instead of reloading it], etc.). I don't know if this clarifies the difference. Edit 2: Quoting @Victor and myself, a website is anything running in your browser, while a web application is something running in your browser that could also run on your desktop, with similar behaviour and features. Gmail is a web application that could replace Outlook. GDocs could replace Office. Grooveshark could replace your music player, and so on.

    Read the article

  • Career Change Need Advice: Professional Web Developer

    - by bikedorkseattle
    I'm hoping to get some advice here on the steps I should take to make a career change into professional web development. I've been working in cancer research for the last 14 years and I need a change. The job market is terrible, the pay is worse, and despite what one would think, the atmosphere is generally un-collegial, even in your own group. Venture funding never returned after the dot-com bust, and with 3 to 5 wars our country is now in, NIH funding is only going to get worse. I know things are not going to get better for my field, sadly, and I know I need to move on. For probably just as long, I have fiddled around with web development; I even run a fairly popular site with close to 1 million pageviews/month that pulls in a decent income, though not one stable enough to live off right now. My skills are OK for being self-taught. I enjoy the fast pace of the web, the tools the community creates, and how eager people are to help and share knowledge; it's what science should be. I have been trying to find an entry-level developer job doing standard HTML/CSS/PHP/MySQL/JS/jQuery work. A good 50%+ of the jobs want someone with a CS degree, and most want 5 years' experience. Having no professional experience and no formal education, I know I'm at a huge disadvantage. I am now considering my options for moving forward professionally. The way I see it, I have basically three options: 1) build up my portfolio of work as much as I can and continue to learn as much as I can on my own, try to contribute to an open-source project when time allows, network like crazy, go to meetups, and be confident and pray a lot in private; OR 2) while doing the above, do some certification programs in PHP and Java, possibly others, and get a Zend Certification; OR 3) spend a few years getting a CS degree while doing 1. I've already done the work-full-time-while-going-to-school thing and it doesn't excite me one bit. I didn't have the greatest college experience and am not too eager to return, but I have a family to feed. Is the degree really necessary, or is it more of a rite-of-passage type thing in most instances? I appreciate everyone's input. Thanks for taking the time to respond.

    Read the article
