Search Results

Search found 37739 results on 1510 pages for 'static site'.


  • Is it possible to change a static value into a dynamic (JavaScript) value in a SharePoint allitems.aspx page?

    - by lerac
    <SharePoint:SPDataSource runat="server" IncludeHidden="true"
        SelectCommand="&lt;View&gt;&lt;Query&gt;&lt;OrderBy&gt;&lt;FieldRef Name=&quot;EventDate&quot;/&gt;&lt;/OrderBy&gt;&lt;Where&gt;&lt;Contains&gt;&lt;FieldRef Name=&quot;lawyer_x0020_1&quot;/&gt;&lt;Value Type=&quot;Note&quot;&gt;F. Sanches&lt;/Value&gt;&lt;/Contains&gt;&lt;/Where&gt;&lt;/Query&gt;&lt;/View&gt;"
        id="datasource1" DataSourceMode="List" UseInternalName="true">
        <InsertParameters>
            <asp:Parameter DefaultValue="{ANUMBER}" Name="ListID"></asp:Parameter>

    This is just one line of the allitems.aspx of a SharePoint list. It only displays items where lawyer 1 = F. Sanches. Before I start messing around with the .aspx page, I wonder if it is possible to change F. Sanches (in the code) into a dynamic variable (from a JavaScript value, or something else that can be used to place the JavaScript value in there dynamically). If I put any JavaScript code in that line it will not work. P.S. Ignore the ANUMBER part in the code. Let's say, to keep it simple, that I have a JavaScript variable like this (static now, but dynamic with my other code). It would already be an achievement if it could place a static JavaScript variable:

        <SCRIPT type=text/javascript>javaVAR = "P. Janssen";</SCRIPT>

    If yes: how? If no: thank you anyway!
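
    One server-side alternative, offered as a hedged sketch rather than a confirmed fix: client script cannot run inside a server-side SelectCommand, but SPDataSource replaces {ParamName} tokens in its CAML with values from SelectParameters, so JavaScript can instead put the value on the query string and navigate, and the data source picks it up on the next request. The token name lawyerName and the query-string key lawyer below are illustrative assumptions:

        <SharePoint:SPDataSource runat="server" id="datasource1" DataSourceMode="List" UseInternalName="true"
            SelectCommand="&lt;View&gt;&lt;Query&gt;&lt;Where&gt;&lt;Contains&gt;&lt;FieldRef Name=&quot;lawyer_x0020_1&quot;/&gt;&lt;Value Type=&quot;Note&quot;&gt;{lawyerName}&lt;/Value&gt;&lt;/Contains&gt;&lt;/Where&gt;&lt;/Query&gt;&lt;/View&gt;">
            <SelectParameters>
                <%-- the {lawyerName} token above is filled from ?lawyer=... on the URL --%>
                <asp:QueryStringParameter Name="lawyerName" QueryStringField="lawyer" DefaultValue="F. Sanches" />
            </SelectParameters>
        </SharePoint:SPDataSource>

    JavaScript then "sets" the variable by redirecting, e.g. location.search = "lawyer=" + encodeURIComponent(javaVAR);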

    Read the article

  • How to remove the upper right close button

    - by tahdhaze09
    I need to kill the close 'x' button at the top of a jQuery UI modal dialog box. I have a modal that opens with an OK button that redirects to the site. The site behind the modal is in an iframe. When the user agrees to the statement in the dialog box and clicks the 'OK' button, it redirects to the site that is outside the iframe. If the user clicks on the 'x', it goes to the iframe site, which I do not want to have happen. I need the modal to work as a one-way door to the site it goes to. It basically forces the user to accept the user agreement. I would post code, but it's an intranet site. Thanks everyone for your help! EDIT: This site is a government intranet portal, not a commercial site. So the goal is NOT to trap a user into the site, but rather to let the user know that the use of this site is restricted, and to make sure you understand and accept the user agreement or you cannot use the site.
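
    For reference, a minimal sketch of one common way to hide the title-bar close button in jQuery UI (the dialog's id and the target URL are assumptions):

        $("#agreement").dialog({
            modal: true,
            closeOnEscape: false, // also stop Esc from dismissing the dialog
            open: function () {
                // hide the 'x' in this dialog's own title bar
                $(this).closest(".ui-dialog").find(".ui-dialog-titlebar-close").hide();
            },
            buttons: {
                OK: function () {
                    window.top.location.href = "https://portal.example/agreed"; // assumed redirect target
                }
            }
        });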

    Read the article

  • Error: "an object reference is required for the non-static field, method or property..."

    - by user300484
    Hi! I'm creating an application in C#. Its function is to evaluate whether a given number is prime and whether the same number with its digits swapped is prime as well. When I build my solution in Visual Studio, it says that "an object reference is required for the non-static field, method or property...". I'm having this problem with the "volteado" and "siprimo" methods. Can you tell me where the problem is and how I can fix it? Thank you!

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Console.Write("Write a number: ");
                    long a = Convert.ToInt64(Console.ReadLine()); // a is the number given by the user
                    long av = volteado(a); // av is "a" but swapped
                    if (siprimo(a) == false && siprimo(av) == false)
                        Console.WriteLine("Both original and swapped numbers are prime.");
                    else
                        Console.WriteLine("One of the numbers isn't prime.");
                    Console.ReadLine();
                }

                private bool siprimo(long a)
                { // evaluate if the received number is prime
                    bool sp = true;
                    for (long k = 2; k <= a / 2; k++)
                        if (a % k == 0)
                            sp = false;
                    return sp;
                }

                private long volteado(long a)
                { // swap the received number
                    long v = 0;
                    while (a > 0)
                    {
                        v = 10 * v + a % 10;
                        a /= 10;
                    }
                    return v;
                }
            }
        }
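
    The compile error itself has a mechanical fix: Main is static, so it has no object instance on which to call instance methods. Marking the two helpers static (a sketch of just the changed signatures; the bodies stay the same) makes it build:

        private static bool siprimo(long a) { /* body unchanged */ }
        private static long volteado(long a) { /* body unchanged */ }

    Separately, note that siprimo returns true for a prime, so the "Both ... are prime" message is currently printed when both calls return false; that condition may be worth a second look.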

    Read the article

  • Can I have the gcc linker create a static library?

    - by Lucas Meijer
    I have a library consisting of some 300 C++ files. The program that consumes the library does not want to dynamically link to it. (For various reasons, but the best one is that some of the supported platforms do not support dynamic linking.) So I use g++ and ar to create a static library (.a). This file contains all symbols of all those files, including ones that the library doesn't want to export. I suspect linking the consuming program with this library takes an unnecessarily long time, as all the .o files inside the .a still need to have their references resolved, and the linker has more symbols to process. When creating a dynamic library (.dylib / .so) you can actually use a linker, which can resolve all intra-lib symbols and export only those that the library wants to export. The result, however, can only be "linked" into the consuming program at runtime. I would like to somehow get the benefits of dynamic linking, but use a static library. If my Google searches are correct in thinking this is indeed not possible, I would love to understand why, as it seems like something that many C and C++ programs could benefit from.
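
    There is a middle ground worth sketching, assuming GNU binutils: partially link the objects into one relocatable object, demote everything except the intended public API to local symbols, then archive the result. The file exported.txt (one public symbol name per line) is a hypothetical input here:

        # combine all objects into one relocatable object, resolving intra-library references
        ld -r -o combined.o *.o

        # make every global symbol not listed in exported.txt local
        objcopy --keep-global-symbols=exported.txt combined.o

        # package the single pre-linked object as an ordinary static library
        ar rcs libmylib.a combined.o

    The consuming link then sees one object exposing only the exported symbols, which is roughly the static-library analogue of a shared library's export list.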

    Read the article

  • Emptying site collection recycle bin doesn’t make content DB smaller?

    - by Mike
    I deleted everything from a site collection recycle bin, then remoted into the SQL server the content database is located on and went to view WSS_Content, and the sucker didn't get smaller. I had a good 2 or 3 GB of folders with files in the recycle bin. I just want to make sure that it is actually getting deleted. Is there something I am missing? Or does the SQL server not update file sizes properly? MOSS 2007, IIS 6, Windows Server 2003.
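
    Two hedged suggestions rather than a definitive diagnosis: first, MOSS 2007 has a two-stage recycle bin, so items emptied from the end-user bin may still sit in the site collection's second-stage bin (Site Settings > Site Collection Administration) until that is emptied too or its retention window expires. Second, even once the rows are really gone, SQL Server keeps the freed pages inside the .mdf file; the file on disk does not shrink by itself. If reclaiming disk space is the goal:

        -- run against the content database after both recycle bin stages are empty
        DBCC SHRINKDATABASE (N'WSS_Content');

    Shrinking fragments indexes, so it is usually a one-off operation, not routine maintenance.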

    Read the article

  • CodeIgniter site in a subdirectory: is its .htaccess file interfering with the .htaccess file in the main directory?

    - by patricksayshi
    In my CodeIgniter site, navigating to any page but the index gives me this error: "No input file specified." Googling around, it seems like the cause must have something to do with my .htaccess situation. The way this is set up (and maybe this can eventually change) is that my CI site is in a subdirectory of the main domain. The CI site and the main domain each have their own .htaccess files. The CI .htaccess file is located in the applications folder:

        <IfModule mod_rewrite.c>
            Options +FollowSymLinks
            RewriteEngine on
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ /SubDomain/index.php?$1 [L]
        </IfModule>

    And the main .htaccess file, two levels up from the CI one, reads:

        RewriteEngine on
        RewriteCond %{SERVER_PORT} 80
        rewriterule ^(.*)$ https://www.MainDomain.org/$1 [r=301,nc]

    I am afraid these two sets of rewrite rules are conflicting with each other, and I really have no idea what to do about it. I can alter either .htaccess file and would really like to get them working together in peace and harmony. It's also possible, however, that this has nothing whatsoever to do with .htaccess. Also, it's hosted on GoDaddy.
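
    A hedged variant of the CI .htaccess that often clears "No input file specified" on GoDaddy-style CGI/FastCGI PHP hosting, assuming the CI folder really is /SubDomain/ under the web root:

        <IfModule mod_rewrite.c>
            Options +FollowSymLinks
            RewriteEngine on
            # base the rule on the CI subdirectory instead of the domain root
            RewriteBase /SubDomain/
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            # the '?/' form (with the slash) is the classic workaround for CGI PHP
            RewriteRule ^(.*)$ index.php?/$1 [L]
        </IfModule>

    One caveat worth knowing: with per-directory mod_rewrite, the rules in the deepest .htaccess normally replace the parent's rules for that subtree, which matters when testing whether the HTTPS redirect still applies inside the CI folder.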

    Read the article

  • Implementing Zend MVC for my existing site: first step?

    - by Joel
    Hi guys, OK, newbie question here. I'll try not to bombard SO with lots of questions, and hopefully this first one will show me the method I'll need to follow for subsequent conversions. I have a web-based calendar system that I developed, but it was coded for me procedurally (using PHP). I'm now working on learning OO and wanting to integrate this site into my localhost Zend Framework setup and slowly start converting parts to OO, and to the Zend Framework MVC structure in particular. As I've said before, I understand that this will be a slow process, and when I'm done I still probably won't have anything as OO-friendly as if I had rewritten it from scratch, but I'd like to use this as a learning experience. So, I have dropped the whole site into my localhost/zend/Public folder, and everything is showing up great and linking to the database, etc. My question is: what would be the easiest first component to switch over to the MVC model? This site has a bit of everything: forms, login, authentication, some jQuery, etc. Can anyone point to a tutorial that would address what I'm trying to do? If indeed a form would be one of the simpler things to switch, can someone walk me through those changes? Another idea is changing over all the header info, etc.? Thanks for any pointers on where to start! EDIT: Also, I understand that SO is mainly for specific coding questions. I'm happy to share specific code once I have an idea about which section to tackle first...

    Read the article

  • IIS 7 - allow http for part of site, https for rest?

    - by Martin Clarke
    In IIS 7, is there a way to set two URLs on the same site to allow both http and https, and require https for everything else?

    - http://mysite/url1 or https://mysite/url1 is accepted and stays on that protocol.
    - http://mysite/url2 or https://mysite/url2 is accepted and stays on that protocol.
    - Any other request over http, e.g. http://mysite/whatever, redirects to https://mysite/whatever.
    - https://mysite/whatever is accepted.

    Edited because the first question wasn't clear enough.
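
    One way to express this with the IIS URL Rewrite module, sketched under the assumption that url1 and url2 stand in for the real paths: a single redirect rule whose conditions exempt the two mixed-protocol paths.

        <rewrite>
          <rules>
            <rule name="Force HTTPS except url1 and url2" stopProcessing="true">
              <match url=".*" />
              <conditions>
                <!-- only act on plain-http requests -->
                <add input="{HTTPS}" pattern="off" ignoreCase="true" />
                <!-- leave the two mixed-protocol paths alone -->
                <add input="{URL}" pattern="^/(url1|url2)" negate="true" />
              </conditions>
              <action type="Redirect" url="https://{HTTP_HOST}{REQUEST_URI}"
                      appendQueryString="false" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>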

    Read the article

  • Is this method of static file serving safe in node.js? (potential security hole?)

    - by MikeC8
    I want to create the simplest node.js server to serve static files. Here's what I came up with:

        fs = require('fs');
        server = require('http').createServer(function(req, res) {
            res.end(fs.readFileSync(__dirname + '/public/' + req.url));
        });
        server.listen(8080);

    Clearly this would map http://localhost:8080/index.html to project_dir/public/index.html, and similarly for all other files. My one concern is that someone could abuse this to access files outside of project_dir/public. Something like this, for example:

        http://localhost:8080/../../sensitive_file.txt

    I tried this a little bit, and it wasn't working. But it seems like my browser was removing the ".." itself, which leads me to believe that someone could abuse my poor little node.js server. I know there are npm packages that do static file serving, but I'm actually curious to write my own here. So my questions are: Is this safe? If so, why? If not, why not? And further, if not, what is the "right" way to do this? My one constraint is that I don't want an if clause for each possible file; I want the server to serve whatever files I throw in a directory.
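
    It is not safe: browsers normalize ../ away before sending the request, but a raw client (curl, netcat, a hand-rolled socket) will happily send the dots through, and readFileSync will follow them out of public/. A sketch of the usual defense, resolving the request against the public root and refusing anything that escapes it:

        var fs = require('fs');
        var path = require('path');
        var root = path.join(__dirname, 'public');

        var server = require('http').createServer(function (req, res) {
            // strip any query string, normalize away ../ segments, anchor under root
            var file = path.normalize(path.join(root, req.url.split('?')[0]));
            if (file.indexOf(root) !== 0) {
                res.statusCode = 403; // escaped the public root: refuse
                return res.end('Forbidden');
            }
            fs.readFile(file, function (err, data) {
                if (err) {
                    res.statusCode = 404;
                    return res.end('Not found');
                }
                res.end(data);
            });
        });
        server.listen(8080);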

    Read the article

  • template specialization for static member functions; howto?

    - by Rolle
    I am trying to implement a template function which handles void differently, using template specialization. The following code gives me an "Explicit specialization in non-namespace scope" error in gcc:

        template <typename T>
        static T safeGuiCall(boost::function<T ()> _f)
        {
            if (_f.empty())
                throw GuiException("Function pointer empty");
            {
                ThreadGuard g;
                T ret = _f();
                return ret;
            }
        }

        // template specialization for functions with no return value
        template <>
        static void safeGuiCall<void>(boost::function<void ()> _f)
        {
            if (_f.empty())
                throw GuiException("Function pointer empty");
            {
                ThreadGuard g;
                _f();
            }
        }

    I have tried moving it out of the class (the class is not templated) and into the namespace, but then I get the error "Explicit specialization cannot have a storage class". I have read many discussions about this, but people don't seem to agree on how to specialize function templates. Any ideas?
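
    Both error messages point at the same two rules: an explicit specialization must appear at namespace scope (in C++03), and the specialization must not repeat a storage-class specifier such as static. A sketch of the shape that compiles (GuiClass is a hypothetical name for the enclosing class):

        class GuiClass {
        public:
            template <typename T>
            static T safeGuiCall(boost::function<T ()> _f);
        };

        // primary template, defined at namespace scope
        template <typename T>
        T GuiClass::safeGuiCall(boost::function<T ()> _f)
        {
            if (_f.empty())
                throw GuiException("Function pointer empty");
            ThreadGuard g;
            return _f();
        }

        // explicit specialization: namespace scope, no 'static' here
        template <>
        void GuiClass::safeGuiCall<void>(boost::function<void ()> _f)
        {
            if (_f.empty())
                throw GuiException("Function pointer empty");
            ThreadGuard g;
            _f();
        }

    A plain overload taking boost::function<void ()> would be another route, since overloads, unlike specializations, participate in ordinary resolution.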

    Read the article

  • Display a web page from another site in an ASP.NET page

    - by Daniel
    Hi all, our customer has a requirement to extend the functionality of their existing large government project. It is an ASP.NET 3.5 (recently upgraded from 2.0) project. The existing solution is quite a behemoth that is almost unmaintainable, so they have decided that they want to provide the new functionality by hosting it on another website that is shown within the existing website. As to how this is best done, I'm not quite sure right now, nor whether there are any security issues preventing it or that need to be considered. Essentially the user would log on to the existing web site as normal, and when clicking on a certain link the page would load as normal with some kind of frame or control that has within it the contents of the page from the other site. I.e., they do not want to simply redirect to the other site; they want to show it embedded within the current one, such that the existing menus etc. are still available. I believe if information needed to be passed to the embedded page it would be done using query strings, as I'm not sure if there is even another way to accomplish this. Can anyone give me some pointers on where to start looking to implement this, or any potential pitfalls I should be aware of? Thanks
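
    Given the constraint that the existing menus must stay visible, the usual shape is an iframe whose src carries the hand-off data in the query string. A hedged sketch (the URL, the userToken parameter, and the TokenValue page property are all assumptions for illustration):

        <%-- inside an existing page, so the current master page and menus render around it --%>
        <iframe width="100%" height="800" frameborder="0"
                src="https://newapp.example.gov/feature.aspx?userToken=<%= Server.UrlEncode(TokenValue) %>">
        </iframe>

    Two pitfalls worth checking: the embedded site must not send an X-Frame-Options header that forbids framing, and query-string values are visible to the user, so anything sensitive should be an opaque token the other site validates server-side rather than raw data.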

    Read the article

  • How to set up site specific configuration vs application configuration in Zend Framework?

    - by rbruhn
    Being fairly new to Zend Framework, I've been reading and trying out various tutorials on the web and books I've purchased. One thing all the tutorials do is hard-code certain values into the bootstrap or other code. For example, setting the title:

        $this->_view->headTitle('MySite');

    I realize this can be set in the application.ini file, but I don't think that is appropriate either if you are distributing the application to other sites. I would be interested in hearing ideas where application-specific settings are set in the application.ini file and loaded:

        $application = new Zend_Application(
            APPLICATION_ENV,
            APPLICATION_PATH.'/configs/application.ini'
        );

    Then, somewhere in the bootstrap, checking for a config.ini file and adding its values to the existing application config array; and if config.ini does not exist, retrieving such site-specific configs from a database and writing the config.ini file (obviously the file is deleted and rewritten if a value is changed in the database). I don't need to see how the file is written or anything... just a general idea of how others are handling such things. Or provide different ideas of doing this? I would rather end up using something like this when setting various site-specific configurations:

        $this->_view->headTitle($config->site->title);

    Hope this makes sense :-)
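
    One possible shape for this, sketched against Zend Framework 1's bootstrap conventions (the _initSiteConfig name and the registry key are choices, not requirements):

        protected function _initSiteConfig()
        {
            // start from the options Zend_Application parsed out of application.ini
            $config = new Zend_Config($this->getOptions(), true);

            $siteIni = APPLICATION_PATH . '/configs/config.ini';
            if (file_exists($siteIni)) {
                // overlay site-specific values on the application defaults
                $config->merge(new Zend_Config_Ini($siteIni));
            }
            // else: fetch the site settings from the database and write config.ini here

            $config->setReadOnly();
            Zend_Registry::set('config', $config);
            return $config;
        }

    A controller or view helper can then read Zend_Registry::get('config')->site->title, which matches the usage the question ends with.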

    Read the article

  • How to make the HTML content files of a simple AJAX site behave nicely in Google?

    - by metatron
    I made some AJAX sites in the past where I used AJAX to get more of a desktop-application feeling for my sites and also to keep them maintainable. My strategy was to make one index page and from there pull in HTML content from some subpages. (So far I didn't use AJAX to send data to the server.) The problem that I ran into is this: I want the subpages to be readable by Google, since they contain valuable content, but once they show up in Google's results they lead to the naked HTML file (no CSS nor JavaScript). I solved this by putting a JavaScript redirect (window.location = ...) on the subpages so they lead to the correct page. So as an example, let's say I have a site at example.com with some JavaScript and CSS, and a naked content page that should be loaded via AJAX: example.com/content.html. Via AJAX I pull in what I need from the content file, but since my index.html contains hrefs to the content.html file (I want the content of my AJAX site to be readable without JavaScript), it will be indexed by Google and listed in the search results. But I don't want people to see the naked HTML file; hence the redirect that goes to the index page and gets handled by some JavaScript to show the content the way I want it shown. I was wondering if there are nicer solutions to this problem, or different approaches.

    Read the article

  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
    I've been working quite a bit with Windows Services in recent months, and well, it turns out that Windows Services are quite a bear to debug, deploy, update and maintain. The process of getting services set up, debugged and updated is a major chore that has to be extensively documented and/or automated specifically. On most projects, when a service is built, people end up scrambling for the right 'process' to use for administration. Web app deployment and maintenance, on the other hand, are common and well understood today, as we are constantly dealing with Web apps. There's plenty of infrastructure and tooling built into Web tools like Visual Studio to facilitate the process. By comparison, Windows Services, or anything self-hosted for that matter, seem convoluted.

    In fact, in a recent blog post I mentioned that on a recent project I'd been using self-hosting for SignalR inside of a Windows service, because the application is in fact a 'service' that also needs to send out lots of messages via SignalR. But the reality is that it could just as well be an IIS application with a service component that runs in the background. Either way you look at it, it's either a Windows Service with a built-in Web server, or an IIS application running a service application, neither of which follows the standard Service or Web App template.

    Personally I much prefer Web applications. Running inside of IIS I get all the benefits of the IIS platform, including service lifetime management (crash and restart), controlled shutdowns, the whole security infrastructure including easy certificate support, hot-swapping of code, and the ability to publish directly to IIS from within Visual Studio with ease. Because of these benefits we set out to move from the self-hosted service into an ASP.NET Web app instead.

    The Missing Link for ASP.NET as a Service: Auto-Loading

    I've had moments in the past where I wanted to run a 'service-like' application in ASP.NET, because when you think about it, it's so much easier to control a Web application remotely. Services are locked into start/stop operations, but if you host inside of a Web app you can write your own ticket and control it from anywhere. In fact, nearly 10 years ago I built a background scheduling application that ran inside of ASP.NET; it worked great and it's still running, doing its job today.

    The tricky part for running an app as a service inside of IIS, then and now, is how to get IIS and ASP.NET launched so your 'service' stays alive even after an Application Pool reset. 7 years ago I faked it by using a web monitor (my own West Wind Web Monitor app) that I was running anyway to monitor my various web sites for uptime, and having the monitor ping my 'service' every 20 seconds to effectively keep ASP.NET alive or fire it back up after a reload. I used a simple scheduler class that also includes some logic for 'self-reloading'. Hacky for sure, but it worked reliably.

    Luckily, today it's much easier and more integrated to get IIS to launch ASP.NET as soon as an Application Pool is started, by using the Application Initialization Module. The Application Initialization Module basically allows you to turn on preloading on the Application Pool and the Site/IIS App, which essentially fires a request through the IIS pipeline as soon as the Application Pool has been launched. This means that effectively your ASP.NET app becomes active immediately, Application_Start is fired, and your app stays up and running at all times.
    All the other features like Application Pool recycling and auto-shutdown after idle time still work, but IIS will then always immediately re-launch the application.

    Getting started with Application Initialization

    As of IIS 8, Application Initialization is part of the IIS feature set. For IIS 7 and 7.5 there's a separate download available via the Web Platform Installer. On IIS 8, Application Initialization is an optional install component in Windows or the Windows Server Role Manager, so make sure you explicitly select it.

    IIS Configuration for Application Initialization

    Initialization needs to be applied at the Application Pool as well as the IIS Application level. As of IIS 8, these settings can be made through the IIS Administration console. Start with the Application Pool: here you need to set both Start Automatically, which is always set, and the StartMode, which should be set to AlwaysRunning. Both have to be set - the Start Automatically flag is set true by default and controls the starting of the application pool itself, while the Always Running flag is required in order to launch the application. Without the latter flag set, the site settings have no effect.

    Now at the Site/Application level you can specify whether the site should preload: set the Preload Enabled flag to true. At this point ASP.NET apps should auto-load. This is all that's needed to pre-load the site if all you want is to get your site launched automatically.

    If you want a little more control over the load process, you can add a few more settings to your web.config file that allow you to show a static page while the app is starting up. This can be useful if startup is really slow, so rather than displaying a blank screen while the user is twiddling their thumbs, you can display a static HTML page instead:

        <system.webServer>
            <applicationInitialization remapManagedRequestsTo="Startup.htm" skipManagedModules="true">
                <add initializationPage="ping.ashx" />
            </applicationInitialization>
        </system.webServer>

    This allows you to specify a page to execute in a dry run. IIS basically fakes a request and pushes it directly into the IIS pipeline without hitting the network. You specify a page and IIS will fake a request to that page, in this case ping.ashx, which just returns a simple OK string - i.e. a fast pipeline request. This request is run immediately after Application Pool restart, and while this request is running and your app is warming up, IIS can display an alternate static page - Startup.htm above. So instead of showing users an empty loading page when clicking a link on your site, you can optionally show some sort of static status page that says, "we'll be right back". I'm not sure if that's such a brilliant idea, since this can be pretty disruptive in some cases. Personally I think I prefer letting people wait, but at least get the response they were supposed to get back rather than a random page. But it's there if you need it.

    Note that the web.config stuff is optional. If you don't provide it, IIS hits the default site link (/) and even if there's no matching request at the end of that request, it'll still fire the request through the IIS pipeline.
    Ideally, though, you want to make sure that an ASP.NET endpoint is hit, either with your default page or by specifying the initializationPage, to ensure ASP.NET actually gets hit, since it's possible for IIS to fire unmanaged requests only for static pages (depending on how your pipeline is configured).

    What about AppDomain Restarts?

    In addition to full worker process recycles at the IIS level, ASP.NET also has to deal with AppDomain shutdowns, which can occur for a variety of reasons:

    - Files are updated in the BIN folder
    - Web Deploy to your site
    - web.config is changed
    - Hard application crash

    These operations don't cause the worker process to restart, but they do cause ASP.NET to unload the current AppDomain and start up a new one. Because the features above only apply to Application Pool restarts, AppDomain restarts could also cause your 'ASP.NET service' to stop processing in the background. In order to keep the app running on AppDomain recycles, you can resort to a simple ping in the Application_End event:

        protected void Application_End()
        {
            var client = new WebClient();
            var url = App.AdminConfiguration.MonitorHostUrl + "ping.aspx";
            client.DownloadString(url);
            Trace.WriteLine("Application Shut Down Ping: " + url);
        }

    which fires any ASP.NET URL to the current site at the very end of the pipeline shutdown, which in turn ensures that the site immediately starts back up.

    Manual Configuration in ApplicationHost.config

    The above UI corresponds to the following ApplicationHost.config settings. If you're using IIS 7, there's no UI for these flags, so you'll have to edit them manually. When you install the Application Initialization component into IIS, it should auto-configure the module into ApplicationHost.config. Unfortunately for me, with Mr. Murphy in his best form, the module registration did not occur and I had to add it manually:

        <globalModules>
            <add name="ApplicationInitializationModule" image="%windir%\System32\inetsrv\warmup.dll" />
        </globalModules>

    Most likely you won't ever need to add this, but if things are not working it's worth checking whether the module is actually registered. Next you need to configure the Application Pool and the Web site. The following are the two relevant entries in ApplicationHost.config:

        <system.applicationHost>
            <applicationPools>
                <add name="West Wind West Wind Web Connection" autoStart="true" startMode="AlwaysRunning"
                     managedRuntimeVersion="v4.0" managedPipelineMode="Integrated">
                    <processModel identityType="LocalSystem" setProfileEnvironment="true" />
                </add>
            </applicationPools>
            <sites>
                <site name="Default Web Site" id="1">
                    <application path="/MPress.Workflow.WebQueueMessageManager"
                                 applicationPool="West Wind West Wind Web Connection" preloadEnabled="true">
                        <virtualDirectory path="/" physicalPath="C:\Clients\…" />
                    </application>
                </site>
            </sites>
        </system.applicationHost>

    On the Application Pool, make sure to set the autoStart and startMode flags to true and AlwaysRunning respectively. On the site, make sure to set the preloadEnabled flag to true. And that's all you should need. You can still set the web.config settings described above as well.

    ASP.NET as a Service?

    In the particular application I'm working on currently, we have a queue manager that runs as a standalone service, polls a database queue, picks out jobs, and processes them on several threads. The service can spin up any number of threads and keep these threads alive in the background while IIS is running, doing its own thing. These threads are newly created threads, so they sit completely outside of the IIS thread pool.
    In order for this service to work, all it needs is a long-running reference that keeps it alive for the lifetime of the application. In this particular app there are two components that run in the background on their own threads: a scheduler that runs various scheduled tasks and handles things like picking up emails to send out outside of IIS's scope, and the QueueManager. Here's what this looks like in global.asax:

        public class Global : System.Web.HttpApplication
        {
            private static ApplicationScheduler scheduler;
            private static ServiceLauncher launcher;

            protected void Application_Start(object sender, EventArgs e)
            {
                // Pings the service and ensures it stays alive
                scheduler = new ApplicationScheduler()
                {
                    CheckFrequency = 600000
                };
                scheduler.Start();

                launcher = new ServiceLauncher();
                launcher.Start();

                // register so shutdown is controlled
                HostingEnvironment.RegisterObject(launcher);
            }
        }

    By keeping these objects around as static instances that are set only once on startup, they survive the lifetime of the application. The code in these classes is essentially unchanged from the Windows Service code, except that I could remove the various overrides required for the Windows Service interface (OnStart, OnStop, OnResume etc.). Otherwise the behavior and operation are very similar.

    In this application, ASP.NET serves two purposes: it acts as the host for SignalR and provides the administration interface which allows remote management of the 'service'. I can start and stop the service remotely by shutting down the ApplicationScheduler very easily. I can also very easily feed stats from the queue out directly via a couple of Web requests, or (as we do now) through the SignalR service.

    Registering a Background Object with ASP.NET

    Notice also the use of HostingEnvironment.RegisterObject(). This function registers an object with ASP.NET to let it know that it's a background task that should be notified if the AppDomain shuts down. RegisterObject() requires an interface with a Stop() method that's fired and allows your code to respond to a shutdown request. Here's what the IRegisteredObject::Stop() method looks like on the launcher:

        public void Stop(bool immediate = false)
        {
            LogManager.Current.LogInfo("QueueManager Controller Stopped.");
            Controller.StopProcessing();
            Controller.Dispose();
            Thread.Sleep(1500); // give background threads some time
            HostingEnvironment.UnregisterObject(this);
        }

    Implementing IRegisteredObject should help with reliability on AppDomain shutdowns. Thanks to Justin Van Patten for pointing this out to me on Twitter. RegisterObject() is not required, but I would highly recommend implementing it on whatever object controls your background processing, to allow clean shutdowns when the AppDomain shuts down.

    Testing it out

    I'm still in the testing phase with this particular service to see if there are any side effects, but so far it doesn't look like it. With about 50 lines of code I was able to replace the Windows service startup with Web startup - everything else just worked as is. An honorable mention goes to SignalR 2.0's OWIN hosting: with the new OWIN-based hosting, no code changes at all were required, merely a couple of configuration file settings and an assembly directive needed to point at the SignalR startup class. Sweet! It also seems like SignalR is noticeably faster running inside of IIS compared to self-host.
    Startup feels faster because of the preload.

    Starting and Stopping the 'Service'

    Because the application is running as a Web server, it's easy to have a Web interface for starting and stopping the services running inside of the service. For our queue manager, the SignalR service and front monitoring app has a play and stop button for toggling the queue. If you want more administrative control and have it work more like a Windows Service, you can also stop the application pool explicitly from the command line, which would be equivalent to stopping and restarting a service. To start and stop from the command line you can use the IIS appCmd tool. To stop:

        > %windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"Weblog"

    and to start:

        > %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"Weblog"

    Note that when you explicitly force the AppPool to stop running, either in the UI (on the Application Pools page use Start/Stop) or via command line tools, the application pool will not auto-restart immediately. You have to manually start it back up.

    What's not to like?

    There are certainly a lot of benefits to running a background service in IIS, but… ASP.NET applications do have more overhead in terms of memory footprint, and startup time is a little slower, but generally for server applications this is not a big deal. If the application is stable, the service should fire up and stay running indefinitely. A lot of times this kind of service interface can simply be attached to an existing Web application, or, if scalability requires, be offloaded to its own Web server.

    Easier to work with

    But the ultimate benefit here is that it's much easier to work with a Web app as opposed to a service. While developing, I can simply turn off the auto-launch features and launch the service on demand through IIS simply by hitting a page on the site. If I want to shut down, an IISRESET -stop will shut down the service easily enough. I can then attach a debugger anywhere I want, and this works like any other ASP.NET application. Yes, you end up on a background thread for debugging, but Visual Studio handles that just fine, and if you stay on a single thread this is no different than debugging any other code.

    Summary

    Using ASP.NET to run background service operations is probably not a super common scenario, but it probably should be something that is considered carefully when building services. Many applications have service-like features, and with the auto-start functionality of the Application Initialization module, it's easy to build this functionality into ASP.NET. Especially when combined with the notification features of SignalR, it becomes very, very easy to create rich services that can also communicate their status easily to the outside world.
    If you have lots of service projects, it's worth considering… give it some thought…

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in ASP.NET  SignalR  IIS

    Read the article

  • Built-in GZip/Deflate Compression on IIS 7.x

    - by Rick Strahl
    IIS 7 improves internal compression functionality dramatically, making it much easier than previous versions to take advantage of compression that's built into the Web server. IIS 7 also supports dynamic compression, which allows automatic compression of content created in your own applications (ASP.NET or otherwise!). The scheme is based on content-type sniffing, so it works with any kind of Web application framework. While static compression on IIS 7 is super easy to set up and turned on by default for most text content (text/*, which includes HTML and CSS, as well as JavaScript, Atom, XAML and XML), setting up dynamic compression is a bit more involved, mostly because the various default compression settings are set in multiple places down the IIS –> ASP.NET hierarchy. Let's take a look at each of the two approaches available:

    Static Compression: compresses static content from the hard disk. IIS can cache this content by compressing the file once, storing the compressed file on disk, and serving the compressed alias whenever static content is requested and it hasn't changed. The overhead for this is minimal and it should be aggressively enabled.

    Dynamic Compression: works against application-generated output from applications like your ASP.NET apps. Unlike static content, dynamic content must be compressed every time a page that requests it regenerates its content. As such, dynamic compression has a much bigger impact than static caching.

    How Compression is configured

    Compression in IIS 7.x is configured with two .config file elements in the <system.webServer> space. The elements can be set anywhere in the IIS/ASP.NET configuration pipeline, all the way from ApplicationHost.config down to the local web.config file. The following is from the default settings in ApplicationHost.config (in the %windir%\System32\inetsrv\config folder) on IIS 7.5, with a couple of small adjustments (added json output and enabled dynamic compression):

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
            <system.webServer>
                <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
                    <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />
                    <dynamicTypes>
                        <add mimeType="text/*" enabled="true" />
                        <add mimeType="message/*" enabled="true" />
                        <add mimeType="application/x-javascript" enabled="true" />
                        <add mimeType="application/json" enabled="true" />
                        <add mimeType="*/*" enabled="false" />
                    </dynamicTypes>
                    <staticTypes>
                        <add mimeType="text/*" enabled="true" />
                        <add mimeType="message/*" enabled="true" />
                        <add mimeType="application/x-javascript" enabled="true" />
                        <add mimeType="application/atom+xml" enabled="true" />
                        <add mimeType="application/xaml+xml" enabled="true" />
                        <add mimeType="*/*" enabled="false" />
                    </staticTypes>
                </httpCompression>
                <urlCompression doStaticCompression="true" doDynamicCompression="true" />
            </system.webServer>
        </configuration>

    You can find documentation on the httpCompression and urlCompression keys here, respectively:
    http://msdn.microsoft.com/en-us/library/ms690689%28v=vs.90%29.aspx
    http://msdn.microsoft.com/en-us/library/aa347437%28v=vs.90%29.aspx

    The httpCompression Element – What and How to compress

    Basically, httpCompression configures what types to compress and how to compress them. It specifies the DLL that handles gzip encoding and the types of documents that are to be compressed. Types are set up based on MIME types, which looks at returned Content-Type headers in HTTP responses.
    For example, I added the application/json MIME type to my dynamic compression types above to allow that content to be compressed as well, since I have quite a bit of AJAX content that gets sent to the client.

    The urlCompression Element – Enables and Disables Compression

    The urlCompression element is a quick way to turn compression on and off. By default, static compression is enabled server-wide and dynamic compression is disabled server-wide. This might be a bit confusing, because the httpCompression element also has a doDynamicCompression attribute which is set to true by default, but the urlCompression attribute of the same name actually overrides it. The urlCompression element only has three attributes: doStaticCompression, doDynamicCompression and dynamicCompressionBeforeCache. The doCompression attributes are the final determining factor for whether compression is enabled, so it's a good idea to be explicit! The default is doDynamicCompression="false", but doStaticCompression="true"!

    Static Compression is enabled by Default, Dynamic Compression is not

    Because static compression is very efficient in IIS 7, it's enabled by default server-wide and there probably is no reason to ever change that setting. Dynamic compression, however, since it's more resource intensive, is turned off by default. If you want to enable dynamic compression there are a few quirks you have to deal with, namely that enabling it in ApplicationHost.config doesn't work. Setting:

        <urlCompression doDynamicCompression="true" />

    in ApplicationHost.config appears to have no effect, and I had to move this element into my local web.config to make dynamic compression work. This is actually a smart choice, because you're not likely to want dynamic compression in every application on a server; rather, dynamic compression should be applied selectively where it makes sense. However, nowhere is it documented that the setting in ApplicationHost.config doesn't work (or, more likely, is overridden somewhere and disabled lower in the configuration hierarchy). So: remember to set doDynamicCompression="true" in web.config!!!

    How Static Compression works

    Static compression works against static content loaded from files on disk. Because this content is static and not bound to change frequently – such as .js, .css and static HTML content – it's fairly easy for IIS to compress and then cache the compressed content. The way this works is that IIS compresses the files into a special folder on the server's hard disk and then reads the content from this location if already-compressed content is requested and the underlying file resource has not changed. The semantics of serving an already compressed file are very efficient – IIS still checks for file changes, but otherwise just serves the compressed file from the compression folder, which is located at:

        %windir%\inetpub\temp\IIS Temporary Compressed Files\ApplicationPool\

    If you look into the subfolders you'll find the pre-compressed files, which IIS serves directly to the client until the underlying files are changed. As I mentioned before, static compression is on by default, and there's very little reason to turn that functionality off, as it is efficient and just works out of the box. The one tweak you might want to make is to set the compression level to maximum. Since IIS compresses content very infrequently, it would make sense to apply maximum compression.
    You can do this with the staticCompressionLevel setting on the scheme element:

        <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

    Other than that, the default settings are probably just fine.

    Dynamic Compression – not so fast!

    By default dynamic compression is disabled, and that's actually quite sensible: you should use dynamic compression very carefully and think about what content you want to compress. In most applications it wouldn't make sense to compress *all* generated content, as that would generate a significant amount of overhead. Scott Forsyth has a great post that details some of the performance numbers and how much impact dynamic compression has. Depending on how busy your server is, you can play around with compression and see what impact it has on your server's performance.

    There are also a few settings you can tweak to minimize the overhead of dynamic compression. Specifically, the httpCompression key has a couple of CPU-related attributes that can help minimize the impact of dynamic compression on a busy server:

        dynamicCompressionDisableCpuUsage
        dynamicCompressionEnableCpuUsage

    By default these are set to 90 and 50, which means that when the CPU hits 90% utilization, compression will be disabled until utilization drops back down to 50%. Again, this is actually quite sensible, as it uses CPU power for compression when it's available and backs off when the threshold has been hit. It's a good way to spend some of that extra CPU power on your big servers when utilization is low. These settings are something you likely have to play with. I would probably set the upper limit a little lower than 90%, maybe around 70%, to make this a feature that kicks in only if there's lots of power to spare. I'm not really sure how accurate the CPU readings that IIS uses are, as CPU usage on Web servers can spike drastically even during low loads. Don't trust settings: do some load testing or monitor your server in a live environment to see what values make sense for your environment.

    Finally, for dynamic compression I tend to add one MIME type for JSON data, since a lot of my applications send large chunks of JSON over the wire. You can do that with the application/json content type:

        <add mimeType="application/json" enabled="true" />

    What about Deflate Compression?

    The default compression is GZip. The documentation hints that you can use a different compression scheme and mentions Deflate compression. And sure enough, you can change the compression settings to:

        <scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

    to get deflate-style compression. The deflate algorithm produces slightly more compact output, so I tend to prefer it over GZip, but more HTTP clients (other than browsers) support GZip than Deflate, so be careful with this option if you build Web APIs. I also had some issues with the above value actually being applied right away. Changing the scheme in ApplicationHost.config didn't show up on the site right away; it required a full IISReset before I saw the change over to deflate-compressed content. Content was slightly more compressed with deflate – not sure if it's worth the slightly less common compression type, but the option at least is available.

    IIS 7 finally makes GZip Easy

    In summary, IIS 7 makes GZip easy at last, even if the configuration settings are a bit obtuse and the documentation is seriously lacking.
    But once you know the basic settings I've described here, and the fact that you can override all of this in your local web.config, it's pretty straightforward to configure GZip support and tweak it exactly to your needs. Static compression is a total no-brainer, as it adds very little overhead compared to direct static file serving and provides solid compression. Dynamic compression is a little more tricky, as it does add some overhead to servers, so it probably will require some tweaking to get the right balance of CPU load vs. compression ratios. Looking at large sites like Amazon, Yahoo, NewEgg etc., they all use compression.

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in IIS7  ASP.NET

    Read the article

  • South migration error: NoMigrations exception for django.contrib.auth

    - by danpalmer
    I have been using South on my project for a while, but I recently did a huge amount of development and changed development machines, and I think something messed up in the process. The project works fine, but I can't apply migrations. Whenever I try to apply a migration I get the following traceback:

        danpalmer:pest Dan$ python manage.py migrate frontend
        Traceback (most recent call last):
          File "manage.py", line 11, in <module>
            execute_manager(settings)
          File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 362, in execute_manager
            utility.execute()
          File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 303, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 195, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 222, in execute
            output = self.handle(*args, **options)
          File "/Library/Python/2.6/site-packages/South-0.7-py2.6.egg/south/management/commands/migrate.py", line 102, in handle
            delete_ghosts = delete_ghosts,
          File "/Library/Python/2.6/site-packages/South-0.7-py2.6.egg/south/migration/__init__.py", line 182, in migrate_app
            applied = check_migration_histories(applied, delete_ghosts)
          File "/Library/Python/2.6/site-packages/South-0.7-py2.6.egg/south/migration/__init__.py", line 85, in check_migration_histories
            m = h.get_migration()
          File "/Library/Python/2.6/site-packages/South-0.7-py2.6.egg/south/models.py", line 34, in get_migration
            return self.get_migrations().migration(self.migration)
          File "/Library/Python/2.6/site-packages/South-0.7-py2.6.egg/south/models.py", line 31, in get_migrations
            return Migrations(self.app_name)
          File "/Library/Python/2.6/site-packages/South-0.7-py2.6.egg/south/migration/base.py", line 60, in __call__
            self.instances[app_label] = super(MigrationsMetaclass, self).__call__(app_label_to_app_module(app_label), **kwds)
          File "/Library/Python/2.6/site-packages/South-0.7-py2.6.egg/south/migration/base.py", line 88, in __init__
            self.set_application(application, force_creation, verbose_creation)
          File "/Library/Python/2.6/site-packages/South-0.7-py2.6.egg/south/migration/base.py", line 159, in set_application
            raise exceptions.NoMigrations(application)
        south.exceptions.NoMigrations: Application '<module 'django.contrib.auth' from '/Library/Python/2.6/site-packages/django/contrib/auth/__init__.pyc'>' has no migrations.

    I am not that experienced with South and I haven't met this error before. The only helpful mention I can find online about this error is for pre-0.7, I think, and I am on South 0.7. I ran 'easy_install -U South' just to make sure. Thanks for any help that you can provide. I really appreciate it.

    Read the article

  • Git for Websites / post-receive / Separation of Test and Production Sites

    - by Walt W
    Hi all, I'm using Git to manage my website's source code and deployment, and currently have the test and live sites running on the same box. Following this resource http://toroid.org/ams/git-website-howto originally, I came up with the following post-receive hook script to differentiate between pushes to my live site and pushes to my test site:

        while read ref
        do
            #echo "Ref updated:"
            #echo $ref -- would print something like example at top of file
            result=`echo $ref | gawk -F' ' '{ print $3 }'`
            if [ $result != "" ]; then
                echo "Branch found: "
                echo $result
                case $result in
                    refs/heads/master )
                        git --work-tree=c:/temp/BLAH checkout -f master
                        echo "Updated master"
                        ;;
                    refs/heads/testbranch )
                        git --work-tree=c:/temp/BLAH2 checkout -f testbranch
                        echo "Updated testbranch"
                        ;;
                    * )
                        echo "No update known for $result"
                        ;;
                esac
            fi
        done
        echo "Post-receive updates complete"

    However, I have doubts that this is actually safe :) I'm by no means a Git expert, but I am guessing that Git probably keeps track of the currently checked-out branch head, and this approach probably has the potential to confuse it to no end. So a few questions: Is this safe? Would a better approach be to have my base repository be the test site repository (with corresponding working directory), and then have that repository push changes to a new live site repository, which has a corresponding working directory for the live site base? This would also allow me to move production to a different server and keep the deployment chain intact. Is there something I'm missing? Is there a different, clean way to differentiate between test and production deployments when using Git for managing websites? As an additional note, in light of Vi's answer: is there a good way to do this that would handle deletions without mucking with the file system much? Thank you, -Walt

    PS - The script I came up with for the multiple repos (and am using unless I hear better) is as follows:

        sitename=`basename \`pwd\``
        while read ref
        do
            #echo "Ref updated:"
            #echo $ref -- would print something like example at top of file
            result=`echo $ref | gawk -F' ' '{ print $3 }'`
            if [ $result != "" ]; then
                echo "Branch found: "
                echo $result
                case $result in
                    refs/heads/master )
                        git checkout -q -f master
                        if [ $? -eq 0 ]; then
                            echo "Test Site checked out properly"
                        else
                            echo "Failed to checkout test site!"
                        fi
                        ;;
                    refs/heads/live-site )
                        git push -q ../Live/$sitename live-site:master
                        if [ $? -eq 0 ]; then
                            echo "Live Site received updates properly"
                        else
                            echo "Failed to push updates to Live Site"
                        fi
                        ;;
                    * )
                        echo "No update known for $result"
                        ;;
                esac
            fi
        done
        echo "Post-receive updates complete"

    And then the repo in ../Live/$sitename (these are "bare" repos with working trees added after init) has the basic post-receive:

        git checkout -f
        if [ $? -eq 0 ]; then
            echo "Live site `basename \`pwd\`` checked out successfully"
        else
            echo "Live site failed to checkout"
        fi

    Read the article

  • Java: repetition, overuse -- ?

    - by HH
    I try to be as minimalist as possible. Repetition is a problem. I hate it. When is it really a problem? What is static overuse? What is field/method overuse? What is class overuse? Are there more types of overuse?

    Problem A: when is it too much to use static?

        private static class Data {
            private static String fileContent;
            private static SizeSequence lineMap;
            private static File fileThing;
            private static char type;
            private static boolean binary;
            private static String name;
            private static String path;
        }

        private static class Print {
            //<1st LINE, LEFT_SIDE, 2nd LINE, RIGHT_SIDE>
            private Integer[] printPositions = new Integer[4];
            private static String fingerPrint;
            private static String formatPrint;
        }

    Problem B: when is it too much to expose field data through getter methods?

        public Stack<Integer> getPositions(){return positions;}
        public Integer[] getPrintPositions(){return printPositions;}
        private Stack<String> getPrintViews(){return printViews;}
        private Stack<String> getPrintViewsPerFile(){return printViewsPerFile;}
        public String getPrintView(){return printView;}
        public String getFingerPrint(){return fingerPrint;}
        public String getFormatPrint(){return formatPrint;}
        public String getFileContent(){return fileContent;}
        public SizeSequence getLineMap(){return lineMap;}
        public File getFile(){return fileThing;}
        public boolean getBinary(){return binary;}
        public char getType(){return type;}
        public String getPath(){return path;}
        public FileObject getData(){return fObj;}
        public String getSearchTerm(){return searchTerm;}

    Related: interface overuse.

    Read the article

  • Adding a class when a link is clicked from a WordPress loop

    - by Carey Estes
    I am trying to isolate and add a class to a clicked anchor tag. The tags are pulled in from a WordPress loop. I can write jQuery to remove the "static" class, but it is removing the class from all tags in the div rather than just the one clicked, and it is not adding the "active" class. Here is the WP loop:

        <div class="more">
            <a class="static" href="<?php bloginfo('template_url'); ?>/work/">ALL</a>
            <?php foreach ($tax_terms as $tax_term) {
                echo '<a class="static" href="' . esc_attr(get_term_link($tax_term, $taxonomy)) . '" title="' . sprintf( __( "View all posts in %s" ), $tax_term->name ) . '" ' . '>' . $tax_term->name.'</a>';
            } ?>
        </div>

    It generates this HTML:

        <div class="more">
            <a class="static" href="#">ALL</a>
            <a class="static" href="#">Things</a>
            <a class="static" href="#">More Things</a>
            <a class="static" href="#">Objects</a>
            <a class="static" href="#">Goals</a>
            <a class="static" href="#">Books</a>
            <a class="static" href="#">Drawings</a>
            <a class="static" href="#">Thoughts</a>
        </div>

    And the jQuery:

        $("div.more a").on("click", function () {
            $("a.static").removeClass("static");
            $(this).addClass("active");
        });

    I have reviewed the other similar questions here and here, but neither solution is working for me. Can this be done with jQuery, or should I put a click event in the inline anchor HTML? It looks like it is working for just a second, until the page reloads.
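
    The last sentence is the key symptom: if the links trigger a real navigation, any class added on click is thrown away when the next page renders. Two hedged options, depending on whether the filtering happens in-page or by loading a new URL:

        // Option 1: in-page filtering - cancel the navigation so the class survives
        $("div.more a").on("click", function (e) {
            e.preventDefault(); // assumption: the content is filtered via AJAX instead
            $("div.more a").removeClass("active");
            $(this).addClass("active");
        });

        // Option 2: real navigation - mark the link matching the current URL on each load
        $(function () {
            $('div.more a[href="' + window.location.href + '"]').addClass("active");
        });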

    Read the article

  • Setting up a Reverse Proxy using IIS, URL Rewrite and ARR

    Today there was a question in the IIS.net Forums asking how to expose two different Internet sites from another site, making them look like subdirectories in the main site. So, for example, the goal was to have a site www.site.com expose a www.site.com/company1 and a www.site.com/company2, and have the content from www.company1.com served for the first one and www.company2.com for the second one. Furthermore, we would like to have the responses cached on the server for performance.
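
    The standard shape of this setup, sketched here with the host names from the question: install Application Request Routing (ARR), enable its proxy mode, and add URL Rewrite rules on www.site.com that rewrite each subdirectory to the corresponding backend site. ARR's disk cache then provides the response caching.

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="company1 proxy" stopProcessing="true">
                <match url="^company1/(.*)" />
                <!-- a Rewrite (not Redirect) action to another host requires ARR proxying -->
                <action type="Rewrite" url="http://www.company1.com/{R:1}" />
              </rule>
              <rule name="company2 proxy" stopProcessing="true">
                <match url="^company2/(.*)" />
                <action type="Rewrite" url="http://www.company2.com/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>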

    Read the article

  • The Sitemap Paradox

    - by Jeff Atwood
    We use a sitemap on Stack Overflow, but I have mixed feelings about it. Web crawlers usually discover pages from links within the site and from other sites. Sitemaps supplement this data to allow crawlers that support Sitemaps to pick up all URLs in the Sitemap and learn about those URLs using the associated metadata. Using the Sitemap protocol does not guarantee that web pages are included in search engines, but provides hints for web crawlers to do a better job of crawling your site. Based on our two years' experience with sitemaps, there's something fundamentally paradoxical about the sitemap: Sitemaps are intended for sites that are hard to crawl properly. If Google can't successfully crawl your site to find a link, but is able to find it in the sitemap it gives the sitemap link no weight and will not index it! That's the sitemap paradox -- if your site isn't being properly crawled (for whatever reason), using a sitemap will not help you! Google goes out of their way to make no sitemap guarantees: "We cannot make any predictions or guarantees about when or if your URLs will be crawled or added to our index" citation "We don't guarantee that we'll crawl or index all of your URLs. For example, we won't crawl or index image URLs contained in your Sitemap." citation "submitting a Sitemap doesn't guarantee that all pages of your site will be crawled or included in our search results" citation Given that links found in sitemaps are merely recommendations, whereas links found on your own website proper are considered canonical ... it seems the only logical thing to do is avoid having a sitemap and make damn sure that Google and any other search engine can properly spider your site using the plain old standard web pages everyone else sees. By the time you have done that, and are getting spidered nice and thoroughly so Google can see that your own site links to these pages, and would be willing to crawl the links -- uh, why do we need a sitemap, again? The sitemap can be actively harmful, because it distracts you from ensuring that search engine spiders are able to successfully crawl your whole site. "Oh, it doesn't matter if the crawler can see it, we'll just slap those links in the sitemap!" Reality is quite the opposite in our experience. That seems more than a little ironic considering sitemaps were intended for sites that have a very deep collection of links or complex UI that may be hard to spider. In our experience, the sitemap does not help, because if Google can't find the link on your site proper, it won't index it from the sitemap anyway. We've seen this proven time and time again with Stack Overflow questions. Am I wrong? Do sitemaps make sense, and we're somehow just using them incorrectly?

    Read the article
