Search Results

Search found 38393 results on 1536 pages for 'war files'.


  • opening offline sync files from a .CAB file

    - by Rob
    OK, I have downloaded from Windows Live Spaces (don't know if this is useful, but it might be) a .CAB file containing an Index.XML file and package.cab, package01.cab through to package12.cab. The Index.XML simply lists the names of all the subsequent package.cab files and their offsets. The first package.cab has a single 26MB XML file which appears to be an OfflineSyncFile definition, which I am guessing is the metadata for all the other packageXX.cab files. Now the question is: how should I go about extracting these things and piecing it all back together again? I have tried WinRAR, which extracts all 800MB for me into unnamed files and randomly named directories. I have also tried the standard extract in Windows Explorer, with much the same results.
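
    As a first step in untangling it, here is a minimal Python sketch that dumps whatever name/offset attributes Index.XML carries. The tag and attribute names are pure assumptions (the post only says the file holds names and offsets), so inspect the actual file and adjust:

      import xml.etree.ElementTree as ET

      tree = ET.parse("Index.XML")

      # Walk every element and print any name/offset attributes found.
      # The attribute names "name"/"offset" are assumptions; the real
      # Windows Live export may use different casing or tags.
      for elem in tree.getroot().iter():
          name = elem.get("name") or elem.get("Name")
          offset = elem.get("offset") or elem.get("Offset")
          if name or offset:
              print(elem.tag, name, offset)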

    Read the article

  • Possibility of recovering files from a dd zero-filled hard disk

    - by unknownthreat
    I have "zero filled" (complete wiped) an external hard disk using dd, and from what I have heard: people said you should at least "zero fill" 3 times to be sure that the data are really wiped and no one can recover anything. So I decided to scan the disk once again after I've zero filled the disk. I was expecting the disk to still have some random binary left. It turned out that it has only a few sequential bytes in the very beginning. This is probably the file structure type and other headers stuff. Other than that, it's all zeros and nothing else. So if we have to recover any file from a zero filled disk, ...how? From what I've heard, even you zero fill the disk, you should still have some data left. ...or could dd really completely annihilate all data?

    Read the article

  • decrypting AES files in an Apache module?

    - by Tom H
    I have a client with a security-policy compliance requirement to encrypt certain files on disk. The obvious way to do this is with device-mapper and an AES crypto module. However, the current system is set up to generate individual files that are encrypted. What are my options for decrypting files on the fly in Apache? I see that mod_ssl and mod_session_crypto do encryption/decryption, or something similar, but not exactly what I am after. I could imagine that a PerlSetOutputFilter would work with a suitable Perl script configured, and I also see mod_ext_filter, so I could just fork a Unix command and decrypt the file, but they both feel like a hack. I am kind of surprised that there is no mod_crypto available... or am I missing something obvious here? Presumably, resource-wise, the Perl filter is the way to go?
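
    For illustration only (not one of the Apache modules mentioned above): a minimal Python sketch of the on-the-fly decryption such a filter would have to perform, assuming AES-CBC with PKCS7 padding via the third-party cryptography package. The key, the IV-in-the-first-16-bytes layout, and the file names are all assumptions:

      from cryptography.hazmat.primitives import padding
      from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

      def decrypt_stream(src, dst, key, iv):
          """Decrypt src into dst in chunks, assuming AES-CBC with PKCS7 padding."""
          decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
          unpadder = padding.PKCS7(128).unpadder()
          while True:
              chunk = src.read(64 * 1024)
              if not chunk:
                  break
              dst.write(unpadder.update(decryptor.update(chunk)))
          dst.write(unpadder.update(decryptor.finalize()) + unpadder.finalize())

      # Hypothetical layout: the first 16 bytes of the file hold the IV.
      with open("secret.enc", "rb") as src, open("plain.out", "wb") as dst:
          iv = src.read(16)
          decrypt_stream(src, dst, key=b"0" * 32, iv=iv)  # placeholder key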

    Read the article

  • LogParser query to grab only external IP addresses from IIS logs?

    - by Josh
    I'm working on a public website that is used by both external visitors and internal employees. I'm after the external visitor hits, but I can't think of a good way to filter out the internal IP ranges. Using LogParser, what is the best way to filter IISW3C logs by IP range? This is all I've come up with so far, which can't possibly be the best or most efficient way:

      WHERE [c-ip] NOT LIKE (10.10.%, 10.11.%)

    Any help is appreciated.
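
    Not LogParser itself, but as a sketch of the filtering logic: a short Python pass over an IISW3C log that keeps only hits from outside the internal ranges. The log file name is hypothetical, and reading those ranges as /16 networks is an assumption:

      import ipaddress

      # Internal ranges from the question, read as /16s; adjust to the real masks.
      INTERNAL = [ipaddress.ip_network(n) for n in ("10.10.0.0/16", "10.11.0.0/16")]

      def is_external(text):
          ip = ipaddress.ip_address(text)
          return not any(ip in net for net in INTERNAL)

      fields = []
      with open("u_ex140609.log") as log:        # hypothetical IISW3C log file
          for line in log:
              if line.startswith("#Fields:"):
                  fields = line.split()[1:]      # column names from the directive
              elif not line.startswith("#") and fields:
                  row = dict(zip(fields, line.split()))
                  ip = row.get("c-ip")
                  if ip and is_external(ip):
                      print(line, end="")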

    Read the article

  • How to handle multiple effect files in XNA

    - by Adam 'Pi' Burch
    So I'm using ModelMesh and its built-in Effects collection to draw a mesh with some shaders I'm playing with. I have a simple GUI that lets me change these parameters to my heart's desire. My question is: how do I handle shaders that have unique parameters? For example, I want a 'shiny' parameter that affects shaders with Phong-type specular components, but for an environment-mapping shader such a parameter doesn't make much sense. The way I have it right now, every time I call the ModelMesh's Draw() function I set all the Effect parameters like so:

      foreach (ModelMesh m in model.Meshes)
      {
          // Slightly change the way the world matrix is calculated when using the
          // bunny object, since it is not quite centered in object space
          if (isDrawBunny == true)
          {
              world = boneTransforms[m.ParentBone.Index] * Matrix.CreateScale(scale) * rotation * Matrix.CreateTranslation(position + bunnyPositionTransform);
          }
          else // If not rendering the bunny, draw normally
          {
              world = boneTransforms[m.ParentBone.Index] * Matrix.CreateScale(scale) * rotation * Matrix.CreateTranslation(position);
          }
          foreach (Effect e in m.Effects)
          {
              Matrix ViewProjection = camera.ViewMatrix * camera.ProjectionMatrix;
              e.Parameters["ViewProjection"].SetValue(ViewProjection);
              e.Parameters["World"].SetValue(world);
              e.Parameters["diffuseLightPosition"].SetValue(lightPositionW);
              e.Parameters["CameraPosition"].SetValue(camera.Position);
              e.Parameters["LightColor"].SetValue(lightColor);
              e.Parameters["MaterialColor"].SetValue(materialColor);
              e.Parameters["shininess"].SetValue(shininess);
              //e.Parameters
              //e.Parameters["normal"]
          }
          m.Draw();
      }

    Note the prescience of the example! The solutions I've thought of involve preloading all the shaders and updating the unique parameters as needed. So my question is: is there a best practice I'm missing here? Is there a way to pull the parameters a given Effect needs from that Effect? Thank you all for your time!

    Read the article

  • How to Enable IPtables TRACE Target on Debian Squeeze (6)

    - by bernie
    I am trying to use the TRACE target of iptables but I can't seem to get any trace information logged. I want to use what is described here: Debugger for Iptables. From the iptables man page for TRACE:

      This target marks packets so that the kernel will log every rule which matches the packets as those traverse the tables, chains, rules. (The ipt_LOG or ip6t_LOG module is required for the logging.) The packets are logged with the string prefix: "TRACE: tablename:chainname:type:rulenum " where type can be "rule" for plain rule, "return" for implicit rule at the end of a user defined chain and "policy" for the policy of the built in chains. It can only be used in the raw table.

    I use the following rule:

      iptables -A PREROUTING -t raw -p tcp -j TRACE

    but nothing is appended to either /var/log/syslog or /var/log/kern.log! Is there another step missing? Am I looking in the wrong place?

    Edit: Even though I can't find log entries, the TRACE target seems to be set up correctly, since the packet counters get incremented:

      # iptables -L -v -t raw
      Chain PREROUTING (policy ACCEPT 193 packets, 63701 bytes)
       pkts bytes target prot opt in  out source   destination
        193 63701 TRACE  tcp  --  any any anywhere anywhere

      Chain OUTPUT (policy ACCEPT 178 packets, 65277 bytes)
       pkts bytes target prot opt in  out source   destination

    Edit 2: The rule

      iptables -A PREROUTING -t raw -p tcp -j LOG

    does print packet information to /var/log/syslog... Why doesn't TRACE work?

    Read the article

  • Lossless cutting of MPEG TS files in Windows

    - by Sebastian P.R. Gingter
    I have several HD video files in transport-stream (.ts) format, recorded with my satellite receiver. I want to cut them, as in simply remove a few minutes from the beginning, the end, and sometimes a few minutes in the middle (remove early starts of recordings, late ends and, for the occasional file, the ads). What is a good piece of software, ideally but not necessarily free, with a GUI to do this? Best would be something where you can select points on a timeline and simply cut the elements out. As a resulting file, the same .ts format would be great, but I could also live with putting the video content into another container, as long as the video is NOT re-encoded/transcoded. The files have additional audio streams and subtitles; these should be retained in the process. My OS is Windows.
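
    Not a GUI, but for reference, a hedged sketch of the underlying lossless operation using ffmpeg's stream copy, driven from Python (assumes ffmpeg is installed; file names and timestamps are examples):

      import subprocess

      def cut_ts(src, dst, start, duration):
          """Extract a segment without re-encoding, via ffmpeg stream copy."""
          subprocess.run([
              "ffmpeg",
              "-ss", start,       # seek to the start point before reading
              "-i", src,
              "-t", duration,     # keep this much of the recording
              "-map", "0",        # keep every stream: video, audio, subtitles
              "-c", "copy",       # stream copy, no re-encoding/transcoding
              dst,
          ], check=True)

      # Hypothetical example: drop the first 2.5 minutes, keep 89 minutes.
      cut_ts("recording.ts", "cut.ts", "00:02:30", "01:29:00")

    Removing a section from the middle would take two such cuts plus a concat step, and because stream copy can only cut at keyframes, edit points land approximately; that is the usual trade-off of lossless cutting.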

    Read the article

  • How to Run files and open folders as an administrator in 11.10

    - by Mattlinux1
    Solved!!! Enable open as administrator in Nautilus! To get started, press Ctrl+Alt+T on your keyboard to open a Terminal. When it opens, run the command below:

      sudo apt-get install nautilus-gksu

    After installing that package, copy and paste the line below and press Enter:

      sudo cp /usr/lib/nautilus/extensions-2.0/libnautilus-gksu.so /usr/lib/nautilus/extensions-3.0/

    Finally, log out and back in, then test it: right-click the file you want to run as admin and you'll see a popup menu entry with the words "Open as Administrator". Visit http://www.liberiangeek.net/2011/12/add-open-as-administrator-to-nautilus-context-menu-in-ubuntu-11-10-oneiric-ocelot/ for help. Enjoy!

    Read the article

  • Apache log - file does not exist

    - by Ivan
    I have quite a few of these piling up in the Apache logs every day:

      [Mon Jun 09 20:42:58 2014] [error] [client 180.153.214.181] File does not exist: /home/user/public_html/ajax.googleapis.com, referer: http://www.mysite.com//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js

    I have over 200k visitors per day, but a few of them, a dozen or so, are generating the above error. I can't figure out what may be causing it. I checked the HTML code and it's all good, so I've run out of ideas.

    Read the article

  • CVS gives error files fail to update, but they are updated on the server

    - by Alvin Sim
    We are using CVSNT on a machine running Windows XP Pro. After using it for more than 5 years, today we have an issue with CVS. When we try to update a subdirectory in a repository, some of the files fail to update with the message "Permission denied" and some fail with the message "ABC.java is no longer in the repository". But when we log in to the CVS machine itself and browse the subdirectory, we can see that the files are there. We have checked the Windows folder permissions and they seem correct.

    Read the article

  • Cannot copy files from external USB HDD to computer

    - by Thomas Versteeg
    Hello, I have an HDD connected to my computer with a USB converter. It consists of two partitions: the first one is mounted automatically and I can grab all the files from it, but the second one I have to mount manually as root on the command line; if I try to open it with Nautilus it gives an error. The problem partition is sdb1; sdb2 has the same settings but works fine. I am using Debian Wheezy. This is the fstab:

      /dev/sdb1 /media/usb0 auto defaults,gid=disk,umask=0777 0 0
      /dev/sdb2 /media/usb1 auto defaults,gid=disk,umask=0777 0 0

    And when I try to copy the files with this command (as root):

      cp -vr /media/usb0/* /home/user/Videos/

    I get these types of errors:

      cp: reading `/media/usb0/.lang/file.ext': Permission denied
      cp: failed to extend `/home/user/Videos/.lang/file.ext': Permission denied

    How can I at least copy the files to my main HDD? I don't need to adjust them, I only need to copy them!

    Read the article

  • May the file size returned by stat be compromised?

    - by codeholic
    I want to make sure that nobody has changed a file. To accomplish that, I want not only to check the MD5 sum of the file but also its size, since as far as I understand this simple additional check makes falsification considerably harder. May I trust the size that stat returns? I don't mean changes to stat itself; I don't go that deep. But, for instance, could one fake the file size that stat returns by hacking the directory file, or by similar means that do not require superuser privileges? It's Linux.
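
    For reference, a minimal Python sketch of the double check described above: size from stat plus an MD5 digest (the file path is just an example):

      import hashlib
      import os

      def fingerprint(path):
          """Record both the size (from stat) and the MD5 digest of a file."""
          size = os.stat(path).st_size
          md5 = hashlib.md5()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(1 << 20), b""):
                  md5.update(chunk)
          return size, md5.hexdigest()

      baseline = fingerprint("/etc/hosts")   # store this somewhere safe
      # ... later ...
      assert fingerprint("/etc/hosts") == baseline, "file was modified"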

    Read the article

  • A compression program that handles files with unusual extensions

    - by ripper234
    WAR files are simply ZIP files with a renamed extension. I'd like to configure a compression program to handle these (on double-clicking the file), but jZip doesn't recognize them unless I rename them to .zip. I have set up the Windows file associations, but jZip just wants to 'add them to archive' instead of opening them. Which compression program would you recommend?
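
    Illustrating the point that a WAR is just a renamed ZIP: Python's standard zipfile module opens one directly, extension notwithstanding (the file name is hypothetical):

      import zipfile

      # zipfile keys on the archive's content, not its extension,
      # so a .war opens exactly like a .zip.
      with zipfile.ZipFile("myapp.war") as war:
          for name in war.namelist():
              print(name)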

    Read the article

  • Hosting and consuming WCF services without configuration files

    - by martinsj
    In this post, I'll demonstrate how to configure both the host and the client in code, without configuring services in the <system.serviceModel> section of the config file. In fact, you don't need a <system.serviceModel> section at all. What you will need (and want) sometimes is the Uri of the service in the configuration file. Configuring the Uri of the service is actually only needed for the client or when self-hosting, not when hosting in IIS. So, exactly what do we need to configure?

    - The binding type and the binding constraints
    - The metadata behavior
    - The debug behavior

    You can of course configure even more, and even more, if you want to; WCF is, after all, the king of configuration… As an example, I'll be hosting and consuming a service that removes most of the default constraints for WCF services, using a BasicHttpBinding. Of course, with regard to security it is probably better to have some constraints on the server, but this is only a demonstration. The ServiceConfig class in the code beneath is a static helper class that will be used in the examples. In this post, I'll be using this helper class for all configuration, for both the server and the client. In WCF, the client and the server each have their own configuration; with this piece of code, they will be sharing the same one.

      public static class ServiceConfig
      {
          public static Binding DefaultBinding
          {
              get
              {
                  var binding = new BasicHttpBinding();
                  Configure(binding);
                  return binding;
              }
          }

          public static void Configure(HttpBindingBase binding)
          {
              if (binding == null)
              {
                  throw new ArgumentException("Argument 'binding' cannot be null. Cannot configure binding.");
              }

              binding.SendTimeout = new TimeSpan(0, 0, 30, 0); // 30 minute timeout
              binding.MaxBufferSize = Int32.MaxValue;
              binding.MaxBufferPoolSize = 2147483647;
              binding.MaxReceivedMessageSize = Int32.MaxValue;
              binding.ReaderQuotas.MaxArrayLength = Int32.MaxValue;
              binding.ReaderQuotas.MaxBytesPerRead = Int32.MaxValue;
              binding.ReaderQuotas.MaxDepth = Int32.MaxValue;
              binding.ReaderQuotas.MaxNameTableCharCount = Int32.MaxValue;
              binding.ReaderQuotas.MaxStringContentLength = Int32.MaxValue;
          }

          public static ServiceMetadataBehavior ServiceMetadataBehavior
          {
              get
              {
                  return new ServiceMetadataBehavior
                  {
                      HttpGetEnabled = true,
                      MetadataExporter = { PolicyVersion = PolicyVersion.Policy15 }
                  };
              }
          }

          public static ServiceDebugBehavior ServiceDebugBehavior
          {
              get
              {
                  var smb = new ServiceDebugBehavior();
                  Configure(smb);
                  return smb;
              }
          }

          public static void Configure(ServiceDebugBehavior behavior)
          {
              if (behavior == null)
              {
                  throw new ArgumentException("Argument 'behavior' cannot be null. Cannot configure debug behavior.");
              }

              behavior.IncludeExceptionDetailInFaults = true;
          }
      }

    Configuring the server

    There are basically two ways to host a WCF service: in IIS, and self-hosting. When hosting a WCF service in a production environment using an SOA architecture, you'll most likely be hosting it in IIS. When testing the service in integration tests, it's very handy to be able to self-host services in the unit tests. In fact, you can share the WCF configuration between self-hosted services and services hosted in IIS. And that is exactly what you want: testing the same configuration in the test and production environments.
    Configuring when self-hosting

    When self-hosting, in order to start the service you'll have to instantiate the ServiceHost class, configure the service and open it:

      // Create the service host.
      var host = new ServiceHost(typeof(MyService), endpoint);

      // Configure the binding
      host.AddServiceEndpoint(typeof(IMyService), ServiceConfig.DefaultBinding, endpoint);

      // Configure metadata behavior
      host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior);

      // Configure debug behavior
      ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]);

      // Start listening to the service
      host.Open();

    Configuring when hosting in IIS

    When you create a WCF service application with the wizard in Visual Studio, you'll end up with bits and pieces of code in order to get the service running:

    - An .svc file with code-behind
    - An interface to the service
    - Web.config

    In order to get rid of the configuration in the <system.serviceModel> section, which the wizard has generated for us, we must tell the service that we have a factory that will create the service for us. We do this by changing the markup of the .svc file:

      <%@ ServiceHost Language="C#" Debug="true" Service="Namespace.MyService" Factory="Namespace.ServiceHostFactory" %>

    The markup tells IIS that we have a factory called ServiceHostFactory for this service. The factory has a method we can override which will be called when someone asks IIS for the service. There are two overloads we can override:

      System.ServiceModel.ServiceHostBase CreateServiceHost(string constructorString, Uri[] baseAddresses)
      System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)

    In this example, we'll be using the last one, so our implementation looks like this:

      public class ServiceHostFactory : System.ServiceModel.Activation.ServiceHostFactory
      {
          protected override System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
          {
              var host = base.CreateServiceHost(serviceType, baseAddresses);
              host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior);
              ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]);
              return host;
          }
      }

    As you can see, we are using the same configuration helper we used when self-hosting. Now that you have a factory, the <system.serviceModel> section of the configuration can be removed, because the section is ignored when the service has a custom factory. If you want to configure something else in the config file, you can do so in some other section.

    Configuring the client

    Microsoft has helpfully created a ChannelFactory class for creating proxy clients. With this approach, you don't have to generate those awful proxy classes for the client. If you share the contracts with the server in their own assembly, as in the layer diagram below, you can share the same piece of code.
    The contracts in WCF are the interface to the service and, if any, the data contracts (custom types) the service depends on. Using the ChannelFactory with our configuration helper class is very simple:

      var identity = EndpointIdentity.CreateDnsIdentity("localhost");
      var endpointAddress = new EndpointAddress(endPoint, identity);
      var factory = new ChannelFactory<IMyService>(ServiceConfig.DefaultBinding, endpointAddress);
      var myService = factory.CreateChannel();
      myService.Hello();
      factory.Close();

    Happy configuration!

    Read the article

  • Unable to log iptables

    - by ActuatedCrayon
    I'm having trouble getting iptables to log to any file. My iptables rules look like:

      Chain INPUT (policy ACCEPT 1366 packets, 433582 bytes)
       pkts bytes target prot opt in     out source    destination
        869 60656 LOG    icmp --  venet0 *   0.0.0.0/0 0.0.0.0/0   LOG flags 0 level 7

    Syslogd is the only log helper running. The default syslog.conf didn't work, so I tried adding "kern.=debug -/var/log/iptables.log", but the file already has "kern.* -/var/log/kern.log". There are recent syslog entries, so it's not a permissions thing. I'm running Ubuntu 12.04.1 with kernel 2.6.32-042stab061.2.

    Read the article

  • Parallel downloading of JavaScript files on page load

    - by user359650
    Below is a quote from one of the Yahoo performance pages:

      While a script is downloading, however, the browser won't start any other downloads, even on different hostnames.

    When I look at the page load of our website, I can see that many scripts are being downloaded at the same time. Am I mistaken, or should the quote instead read like this?

      While scripts are downloading (there can be several scripts downloading at the same time), the browser won't start any other downloads, even on different hostnames.

    Read the article

  • IIS FTP - Users Last Logon

    - by Izzy
    How would you determine the last FTP logon time/date for a bunch of local user accounts on a DMZ (standalone/workgroup) server running IIS FTP? I know I could use a log aggregator and sift through the logs that way, but this server has been operational for approximately 8 years and I don't fancy that vector. I have also tried the scripting route, but this is of no use because the users have never actually logged on to the machine, so there are no profiles (rendering the WMI classes Win32_UserAccount and Win32_UserProfile useless). They're just used to access the FTP service. Thanks in advance.
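
    If sifting the logs turns out to be the only option, here is a small Python sketch of a one-off pass over the IIS FTP W3C logs, recording the last timestamp seen per user. The log directory is hypothetical and the field names follow the standard W3C layout, so verify both against your files:

      import glob

      last_seen = {}

      # Hypothetical log location; adjust to your site's log directory.
      for path in sorted(glob.glob(r"C:\inetpub\logs\LogFiles\FTPSVC1\*.log")):
          fields = []
          with open(path, errors="replace") as log:
              for line in log:
                  if line.startswith("#Fields:"):
                      fields = line.split()[1:]
                  elif not line.startswith("#") and fields:
                      row = dict(zip(fields, line.split()))
                      user = row.get("cs-username", "-")
                      if user != "-":
                          last_seen[user] = row.get("date", "") + " " + row.get("time", "")

      for user, when in sorted(last_seen.items()):
          print(user, when)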

    Read the article

  • Downloading multiple files with wget and handling parameters

    - by coure2011
    How can I download multiple files using wget? I also want to rename the files. Here are the commands I'm running one by one (copy/paste into the terminal):

      wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720774/PS11.rar -O part11.rar
      wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812721094/PS12.rar -O part12.rar
      wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720804/PS13.rar -O part13.rar
      wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720854/PS14.rar -O part14.rar

    ...and so on. What can I do to download all these files one by one?
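
    Since the file IDs in those URLs aren't sequential, one sketch is to keep the (URL, output name) pairs in a list and loop over them, here via Python with the same wget flags as above (the cookie file and the pairs are taken from the commands in the question):

      import subprocess

      downloads = [
          ("http://www.filesonic.com/file/812720774/PS11.rar", "part11.rar"),
          ("http://www.filesonic.com/file/812721094/PS12.rar", "part12.rar"),
          ("http://www.filesonic.com/file/812720804/PS13.rar", "part13.rar"),
          ("http://www.filesonic.com/file/812720854/PS14.rar", "part14.rar"),
          # ... remaining parts ...
      ]

      for url, name in downloads:
          # Same flags as the one-by-one commands: resume support plus cookies.
          subprocess.run(
              ["wget", "-c", "--load-cookies", "cookies.txt", url, "-O", name],
              check=True,
          )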

    Read the article

  • Outlook 2010 - Export of an Exchange OST to PST creates files with different sizes each time

    - by Jiri Pik
    This is a most weird issue. I have a couple of Exchange OST mailboxes, and just to be safe, I am exporting them using File / Import / Export to a file / Export to PST file. If I run the export twice in a row, it creates files with different file sizes each time, WITH NO ERROR OR WARNING that something went wrong. The files should be the same size if you run the export right after the previous one finished. I found out that if the file size is substantially lower, a reboot and another backup can fix it. What's your insight into this problem? What could cause the files to have different sizes, and why is there no warning? I suspected some Windows Search issue, as sometimes the export fails with a dialog error stating that Windows Search terminated the export.

    Read the article

  • Best way to compare (diff) a full directory structure?

    - by Adam Matan
    Hi, what's the best way to compare directory structures? I have a backup utility which uses rsync. I want to tell the exact differences (in terms of file sizes and last-changed dates) between the source and the backup. Something like:

      Local file                      Remote file                     Compare
      /home/udi/1.txt (date)(size)    /home/udi/1.txt (date)(size)    EQUAL
      /home/udi/2.txt (date)(size)    /home/udi/2.txt (date)(size)    DIFFERENT

    Of course, the tool can be ready-made or an idea for a Python script. Many thanks! Udi
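
    Along the lines of the Python-script idea, a minimal sketch that walks both trees and compares size and last-modified time per relative path (the paths are examples, and the remote tree is assumed to be mounted locally):

      import os

      def snapshot(root):
          """Map each relative path under root to its (size, mtime)."""
          result = {}
          for dirpath, _, filenames in os.walk(root):
              for name in filenames:
                  full = os.path.join(dirpath, name)
                  st = os.stat(full)
                  result[os.path.relpath(full, root)] = (st.st_size, int(st.st_mtime))
          return result

      def compare(local_root, remote_root):
          local, remote = snapshot(local_root), snapshot(remote_root)
          for rel in sorted(set(local) | set(remote)):
              if rel not in remote:
                  verdict = "MISSING REMOTELY"
              elif rel not in local:
                  verdict = "MISSING LOCALLY"
              elif local[rel] == remote[rel]:
                  verdict = "EQUAL"
              else:
                  verdict = "DIFFERENT"
              print(rel, verdict)

      compare("/home/udi", "/mnt/backup/udi")   # example paths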

    Read the article

  • Copying and pasting files. Nothing new here

    - by Blake Wood
    I installed XBMC and it's working beautifully. I have my own skin I'd like to use, and all I should have to do is copy and paste my skin folder into the addons folder and I'm done. However, this isn't so easy with Ubuntu. I have the latest version, installed today, 11-14-2012. Could someone please spell out the command process to make this happen? I've read through so many forums and tried SSH; I know I'm getting into things that could be dangerous, so any help would be much appreciated.

    Structure:

      /home/cantrellsmedia/Downloads/skin.cantrell  <----- needs to be copied
      /usr/share/xbmc/addons/                       <----- paste here

    Some of what I have tried:

      cp skin.cantrell
      mv skin.cantrell ~/usr/share/xbmc/addons/
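
    For illustration, the same copy expressed as a short Python sketch using shutil; the paths are taken from the post, and it must run with sudo because /usr/share/xbmc/addons/ is root-owned:

      import shutil

      src = "/home/cantrellsmedia/Downloads/skin.cantrell"
      dst = "/usr/share/xbmc/addons/skin.cantrell"

      # Recursively copy the skin folder into the addons directory.
      # Run with sudo (e.g. `sudo python3 copy_skin.py`); copytree
      # expects the destination not to exist yet.
      shutil.copytree(src, dst)
      print("copied", src, "->", dst)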

    Read the article

  • make IIS 7.5 cache static content files across different pages

    - by Achilles
    On a Windows 2008 R2 server, using DNS and IIS, I've set up my development test server; i.e. I have a web application that I can browse at http://test.dev. I've moved all the static content files, like images, JS files and CSS files, into another application which is visible at http://cdn.test.dev. test.dev uses cdn.test.dev URLs like http://cdn.test.dev/js/jquery.js to load JS, CSS and images. When I first load "~/" of test.dev, all files load with a response code of 200; when I press F5 in Firefox, all files except "~/default.aspx" load with a 304 response code; but pressing Ctrl+F5 loads them again with a 200 code. If I browse another URL like "~/pages/" on test.dev, all of those static files reload with a 200 code... Is this normal, or am I doing something wrong? Actually, I'm looking for behavior like this: I want the client to load http://cdn.test.dev/js/jquery.js only once, and the client's browser to use that jquery.js file from cache on all other pages of test.dev. Is this possible? This is the web.config file I have in the root directory of cdn.test.dev:

      <configuration>
        <system.webServer>
          <caching>
            <profiles>
              <add extension=".png" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
              <add extension=".gif" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
              <add extension=".jpg" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
              <add extension=".js" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
              <add extension=".css" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
              <add extension=".axd" kernelCachePolicy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
            </profiles>
          </caching>
          <httpProtocol allowKeepAlive="true">
            <customHeaders>
              <add name="Cache-Control" value="public, max-age=31536000" />
            </customHeaders>
          </httpProtocol>
          <validation validateIntegratedModeConfiguration="false" />
          <modules runAllManagedModulesForAllRequests="true">
            <remove name="RadUploadModule" />
            <remove name="RadCompression" />
            <add name="RadUploadModule" type="Telerik.Web.UI.RadUploadHttpModule" preCondition="integratedMode" />
            <add name="RadCompression" type="Telerik.Web.UI.RadCompression" preCondition="integratedMode" />
          </modules>
          <handlers>
            <remove name="ChartImage_axd" />
            <remove name="Telerik_Web_UI_SpellCheckHandler_axd" />
            <remove name="Telerik_Web_UI_DialogHandler_aspx" />
            <remove name="Telerik_RadUploadProgressHandler_ashx" />
            <remove name="Telerik_Web_UI_WebResource_axd" />
            <add name="ChartImage_axd" path="ChartImage.axd" type="Telerik.Web.UI.ChartHttpHandler" verb="*" preCondition="integratedMode" />
            <add name="Telerik_Web_UI_SpellCheckHandler_axd" path="Telerik.Web.UI.SpellCheckHandler.axd" type="Telerik.Web.UI.SpellCheckHandler" verb="*" preCondition="integratedMode" />
            <add name="Telerik_Web_UI_DialogHandler_aspx" path="Telerik.Web.UI.DialogHandler.aspx" type="Telerik.Web.UI.DialogHandler" verb="*" preCondition="integratedMode" />
            <add name="Telerik_RadUploadProgressHandler_ashx" path="Telerik.RadUploadProgressHandler.ashx" type="Telerik.Web.UI.RadUploadProgressHandler" verb="*" preCondition="integratedMode" />
            <add name="Telerik_Web_UI_WebResource_axd" path="Telerik.Web.UI.WebResource.axd" type="Telerik.Web.UI.WebResource" verb="*" preCondition="integratedMode" />
          </handlers>
          <security>
            <requestFiltering>
              <requestLimits maxAllowedContentLength="10485760" />
            </requestFiltering>
          </security>
          <staticContent>
            <clientCache cacheControlMode="UseExpires" httpExpires="Wed, 01 Jan 2020 00:00:00 GMT" />
          </staticContent>
        </system.webServer>
        <appSettings />
        <system.web>
          <compilation debug="false" targetFramework="4.0" />
          <pages>
            <controls>
              <add tagPrefix="telerik" namespace="Telerik.Web.UI" assembly="Telerik.Web.UI" />
            </controls>
          </pages>
          <httpHandlers>
            <add path="ChartImage.axd" type="Telerik.Web.UI.ChartHttpHandler" verb="*" validate="false" />
            <add path="Telerik.Web.UI.SpellCheckHandler.axd" type="Telerik.Web.UI.SpellCheckHandler" verb="*" validate="false" />
            <add path="Telerik.Web.UI.DialogHandler.aspx" type="Telerik.Web.UI.DialogHandler" verb="*" validate="false" />
            <add path="Telerik.RadUploadProgressHandler.ashx" type="Telerik.Web.UI.RadUploadProgressHandler" verb="*" validate="false" />
            <add path="Telerik.Web.UI.WebResource.axd" type="Telerik.Web.UI.WebResource" verb="*" validate="false" />
          </httpHandlers>
          <httpModules>
            <add name="RadUploadModule" type="Telerik.Web.UI.RadUploadHttpModule" />
            <add name="RadCompression" type="Telerik.Web.UI.RadCompression" />
          </httpModules>
          <httpRuntime maxRequestLength="10240" />
        </system.web>
      </configuration>

    and this is the resulting response header for http://cdn.test.dev/css/global.css:

      Cache-Control: private,public, max-age=31536000
      Content-Type: text/css
      Content-Encoding: gzip
      Expires: Wed, 01 Jan 2020 00:00:00 GMT
      Last-Modified: Mon, 06 Sep 2010 08:53:06 GMT
      Accept-Ranges: bytes
      Etag: "0454eca04dcb1:0"
      Vary: Accept-Encoding
      Server: Microsoft-IIS/7.5
      X-Powered-By: ASP.NET
      Date: Mon, 06 Sep 2010 14:57:08 GMT
      Content-Length: 4495

    Read the article

  • Save programmatically created Mesh to .X file using SlimDX throws null exception

    - by zionpi
    The mesh has been created properly using SlimDX, but when I use the following line:

      Mesh.ToXFile(barMesh, "foo.x", XFileFormat.Text, CharSet.Unicode);

    it throws a NullReferenceException. Through the watch window I can see that barMesh is not null; inside the mesh structure, SkinInfo is null. If SkinInfo is the problem, how can I initialize it properly? The internet doesn't seem to have much information on this.

    Read the article
