Search Results

Search found 39200 results on 1568 pages for 'zip files'.

Page 170/1568

  • Access denied when trying to open files/folders after reinstall [closed]

    - by user711532
    Possible Duplicate: Access Denied when saving a file in Windows 7
    I installed Windows 7 fresh on a new machine. Now when I unarchive (with WinRAR, 7-Zip, etc.) to Program Files (x86), for example, I get "access denied". Even if I copy a file to a folder I installed an app into, it is still access denied. I checked the security settings, and it looks like full control is given to the creator - this is weird, as I never ran across it before (same version of Windows 7 - it's just a fresh install after some new hardware). It is the same effect as editing the hosts file: if you do not use "Run as administrator" you will not be able to save it, and you will have to save it somewhere else. This "file copy" issue I ran into is the same. I could change all these permissions, but this is something I never had to do before. I am the admin - why did the install not give me full control? How can this be fixed globally? I cannot change the permissions - they are greyed out - so that is weird as well. If I were a standard user it would make sense, but again, I am the admin.
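    For anyone hitting the same wall, a minimal sketch of taking ownership and re-granting rights from an elevated command prompt - the folder path is purely an example, and rewriting ACLs under Program Files should be done sparingly:

        rem Take ownership of the folder tree (example path - point it at the affected folder)
        takeown /F "C:\Program Files (x86)\SomeApp" /R /D Y
        rem Grant the Administrators group full control, recursively
        icacls "C:\Program Files (x86)\SomeApp" /grant Administrators:(OI)(CI)F /T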


  • Web Site Performance and Assembly Versioning

    - by capgpilk
    I originally wanted to write this post in one, but there is quite a large amount of information, which can be broken down into different areas, so I am going to publish it in three posts:

        Minification and Concatenation of JavaScript and CSS Files - this post
        Versioning Combined Files Using Subversion - published shortly
        Versioning Combined Files Using Mercurial - published shortly

    Website Performance

    There are many ways to improve web site performance; two areas are reducing the amount of data that is served up from the web server and reducing the number of files that are requested. Here I will outline the process of minifying and concatenating your JavaScript and CSS files automatically at build time of your Visual Studio web site/application. To edit the project file in Visual Studio, you need to first unload it by right-clicking the project in Solution Explorer. I prefer to do this in a third-party tool such as Notepad++ and save it there, forcing VS to reload it each time I make a change, as the whole process in Visual Studio can be a bit tedious. Now that you have the project file open, you will notice that it is an MSBuild project file. I am going to use a fantastic utility from Microsoft called Ajax Minifier. This tool minifies both JavaScript and CSS.

    1. Import the tasks for AjaxMin, choosing the location you installed to. I keep all third-party utilities in a Tools directory within my solution structure and source control. This way I know I can get the entire solution from source control without worrying about what other tools I need to get the project to build locally.

        <Import Project="..\Tools\MicrosoftAjaxMinifier\AjaxMin.tasks" />

    2. Now create ItemGroups for all your JS and CSS files like this, separating out your non-minified files and minified files. These can go in the AfterBuild container.

        <Target Name="AfterBuild">

          <!-- Javascript files that need minimizing -->
          <ItemGroup>
            <JSMin Include="Scripts\jqModal.js" />
            <JSMin Include="Scripts\jquery.jcarousel.js" />
            <JSMin Include="Scripts\shadowbox.js" />
          </ItemGroup>
          <!-- CSS files that need minimizing -->
          <ItemGroup>
            <CSSMin Include="Content\Site.css" />
            <CSSMin Include="Content\themes\base\jquery-ui.css" />
            <CSSMin Include="Content\shadowbox.css" />
          </ItemGroup>

          <!-- Javascript files to combine -->
          <ItemGroup>
            <JSCat Include="Scripts\jqModal.min.js" />
            <JSCat Include="Scripts\jquery.jcarousel.min.js" />
            <JSCat Include="Scripts\shadowbox.min.js" />
          </ItemGroup>
          <!-- CSS files to combine -->
          <ItemGroup>
            <CSSCat Include="Content\Site.min.css" />
            <CSSCat Include="Content\themes\base\jquery-ui.min.css" />
            <CSSCat Include="Content\shadowbox.min.css" />
          </ItemGroup>

    3. Call AjaxMin to do the crunching.

          <Message Text="Minimizing JS and CSS Files..." Importance="High" />
          <AjaxMin JsSourceFiles="@(JSMin)" JsSourceExtensionPattern="\.js$"
                   JsTargetExtension=".min.js" JsEvalTreatment="MakeImmediateSafe"
                   CssSourceFiles="@(CSSMin)" CssSourceExtensionPattern="\.css$"
                   CssTargetExtension=".min.css" />

    This will create the *.min.css and *.min.js files in the same directory as the original files.

    4. Now concatenate the minified files into one file for JavaScript and another for CSS. Here we write out the files with a default file name. In later posts I will cover versioning these files, the same as your project assembly, again to help performance.

          <Message Text="Concat JS Files..." Importance="High" />
          <ReadLinesFromFile File="%(JSCat.Identity)">
            <Output TaskParameter="Lines" ItemName="JSLinesSite" />
          </ReadLinesFromFile>
          <WriteLinesToFile File="Scripts\site-script.combined.min.js" Lines="@(JSLinesSite)"
                            Overwrite="true" />
          <Message Text="Concat CSS Files..." Importance="High" />
          <ReadLinesFromFile File="%(CSSCat.Identity)">
            <Output TaskParameter="Lines" ItemName="CSSLinesSite" />
          </ReadLinesFromFile>
          <WriteLinesToFile File="Content\site-style.combined.min.css" Lines="@(CSSLinesSite)"
                            Overwrite="true" />

        </Target>

    5. Save the project file; if you have Visual Studio open, it will ask you to reload the project. You can now run a build and these minified and combined files will be created automatically.

    6. Finally, reference these minified combined files in your web page. In the next two posts I will cover versioning these files to match your assembly.
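    Step 6 might look like the following (illustrative markup - adjust the paths if your combined files are written elsewhere):

        <script type="text/javascript" src="Scripts/site-script.combined.min.js"></script>
        <link rel="stylesheet" type="text/css" href="Content/site-style.combined.min.css" />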


  • How to associate document files with MS Office 2010 Beta?

    - by Semyon Perepelitsa
    I've installed the MS Office 2010 Beta (Click-to-Run technology). All apps launch through a single program; Word, for example, has this shortcut target:

        "C:\Program Files (x86)\Common Files\microsoft shared\Virtualization Handler\CVH.EXE" "Microsoft Word 2010 (Beta) 2014006204190000"

    Or OneNote:

        "C:\Program Files (x86)\Common Files\microsoft shared\Virtualization Handler\CVH.EXE" "Microsoft OneNote 2010 (Beta) 2014006204190000"

    Because of that, I can't associate files with the Office programs in the file properties dialog; they actually get associated with "Microsoft Office Client Virtualization Handler" (CVH.EXE). Does anyone know another way to do that?
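    One avenue worth exploring, sketched here as a guess rather than a confirmed fix - whether CVH.EXE forwards a trailing file argument to the virtualized app is an assumption to verify:

        rem Map the extension to a custom ProgID, then point the ProgID at CVH.EXE.
        rem Whether CVH.EXE passes "%1" through to OneNote is an assumption.
        assoc .one=OneNote.Beta
        ftype OneNote.Beta="C:\Program Files (x86)\Common Files\microsoft shared\Virtualization Handler\CVH.EXE" "Microsoft OneNote 2010 (Beta) 2014006204190000" "%1"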


  • Creating a USB stick for installing CentOS 6.x using DVD1 and DVD2 ISO files

    - by user250563
    First, we create 2 partitions on the USB stick, which is, let's say, 16GB. The first partition is, let's say, only 1GB and the second partition is the rest of what is available. After we "w" (write) the changes, the USB now has 2 partitions, one of 1GB and one of more than 14GB, so we have sdb1 and sdb2 now. Now we need to turn these partitions into filesystems. Some say I should run these commands after those procedures:

        mkfs.vfat -F 32 /dev/sdb1
        mkfs.ext3 /dev/sdb2

    but some web pages recommend using:

        mkfs.vfat -n BOOT /dev/sdb1
        mkfs.ext2 -m 0 -b 4096 -L DATA /dev/sdb2

    Which is it? So let's say the DVDs are called:

        CentOS-6.4-x86_64-bin-DVD1.iso
        CentOS-6.4-x86_64-bin-DVD2.iso

    So we make a directory:

        mkdir -p /mnt/dvd1

    and then mount it:

        mount -o loop CentOS-6.4-x86_64-bin-DVD1.iso /mnt/dvd1

    And I suppose we don't make a directory for dvd2 and we don't have to mount it? At this point I do not know what should be done, but I think this step might be next: we make the USB bootable by finding the file named mbr.bin and then writing it there via these commands:

        dd conv=notrunc bs=440 count=1 if=/usr/lib/syslinux/mbr.bin of=/dev/sdb
        parted /dev/sdb set 1 boot on

    In other words, we are dd-ing it to 'sdb', not 'sdb1' or 'sdb2', and then we use parted to set the boot flag on for sdb. So far everything looks good? Here is the confusing part: how exactly do I move these ISO files to the USB drive? EVERYTHING BELOW IS A GUESS. So at this point should I copy the folder /mnt/dvd1/isolinux to the USB's sdb1 or sdb2, rename it to syslinux, and then rename the file called isolinux.cfg inside this syslinux folder to syslinux.cfg? And then copy the contents of /mnt/dvd1/images/* to the USB's sdb2? But I think I am also supposed to copy both CentOS-6.4-x86_64-bin-DVD1.iso and CentOS-6.4-x86_64-bin-DVD2.iso somewhere onto this USB's sdb2 partition, correct? Almost like a drag and drop kind of a thing? Or do they go into any folders? CentOS' own web site has some instructions but those instructions do not work: http://wiki.centos.org/HowTos/InstallFromUSBkey I once got this working but things got ruined; I have to do it again and this time take notes.
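    For reference, a hedged sketch of the layout the question is guessing at (assumptions: a syslinux boot partition on sdb1, the ISOs and install images on sdb2, mount points invented for illustration - verify against the CentOS wiki):

        # Mount both USB partitions (mount points are arbitrary)
        mkdir -p /mnt/usb-boot /mnt/usb-data
        mount /dev/sdb1 /mnt/usb-boot
        mount /dev/sdb2 /mnt/usb-data

        # Copy the boot loader files; syslinux expects its own file names
        cp -r /mnt/dvd1/isolinux /mnt/usb-boot/syslinux
        mv /mnt/usb-boot/syslinux/isolinux.cfg /mnt/usb-boot/syslinux/syslinux.cfg

        # Installer images and the ISOs themselves go on the data partition
        cp -r /mnt/dvd1/images /mnt/usb-data/
        cp CentOS-6.4-x86_64-bin-DVD1.iso CentOS-6.4-x86_64-bin-DVD2.iso /mnt/usb-data/

        # Install the syslinux boot code on the boot partition (in addition to the MBR dd above)
        syslinux /dev/sdb1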


  • How do I make IE8 NOT delete temporary internet files?

    - by Josh
    Every time I close Internet Explorer, all temporary files (including cookies) are deleted. IE has a setting for this (Tools > Internet Options > Advanced > Security > "Empty Temporary Internet Files folder when browser is closed") but the setting is turned off. I tried cycling it on and off again, with no luck. I can open the Temporary Internet Files folder and watch all the files vanish each time IE closes. How can I get the temporary files to stay where they belong?
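    One thing worth checking is whether a registry value is forcing the cleanup regardless of the UI checkbox. A hedged sketch - the key and value below are what this setting is commonly reported to map to, so treat the names as assumptions and back up the key first:

        rem "Persistent"=0 is commonly reported to mean "empty the cache on exit"
        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Cache" /v Persistent
        rem Set it to 1 to keep the cache (assumption to verify)
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Cache" /v Persistent /t REG_DWORD /d 1 /f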


  • HTTP Content-type header for cached files

    - by Brian
    Hello, using Apache with mod_rewrite, when I load a .css or .js file and view the HTTP headers, the Content-Type is only set correctly the first time I load it - subsequent refreshes are missing Content-Type altogether, and it's creating some problems for me. Specifically, gzip is not compressing these files. I can get around this by appending a random query string value to the end of each filename, e.g.

        http://www.site.com/script.js?12345

    However, I don't want to have to do that, since caching is good and all I want is for the Content-Type to be present. I've tried using a RewriteRule to force the type, but that still didn't solve the problem. Any ideas? Thanks, Brian

    More details - HTTP headers WITHOUT the random query string value (http://localhost/script.js):

        GET /script.js HTTP/1.1
        Host: localhost
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: */*
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://localhost/
        Cookie: PHPSESSID=ke3p35v5qbus24che765p9jni5;
        If-Modified-Since: Thu, 29 Apr 2010 15:49:56 GMT
        If-None-Match: "3440e9-119ed-485621404f100"
        Cache-Control: max-age=0

        HTTP/1.1 304 Not Modified
        Date: Thu, 29 Apr 2010 20:19:44 GMT
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1
        Connection: Keep-Alive
        Keep-Alive: timeout=5, max=100
        Etag: "3440e9-119ed-485621404f100"
        Vary: Accept-Encoding
        X-Pad: avoid browser bug

    HTTP headers WITH the random query string value (http://localhost/script.js?c947344de8278053f6edbb4365550b25):

        GET /script.js?c947344de8278053f6edbb4365550b25 HTTP/1.1
        Host: localhost
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: */*
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://localhost/
        Cookie: PHPSESSID=ke3p35v5qbus24che765p9jni5;

        HTTP/1.1 200 OK
        Date: Thu, 29 Apr 2010 20:14:40 GMT
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1
        Last-Modified: Thu, 29 Apr 2010 15:49:56 GMT
        Etag: "3440e9-119ed-485621404f100"
        Accept-Ranges: bytes
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 24605
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: application/javascript
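    Worth noting when reading the traces: the request without the query string is answered with 304 Not Modified, which carries no body and can therefore legitimately omit Content-Type and Content-Encoding. If the goal is simply to guarantee correct types and compression on full (200) responses, a sketch along these lines uses standard mod_mime/mod_deflate directives (paths and types to taste):

        # Make sure .js/.css carry an explicit type
        AddType application/javascript .js
        AddType text/css .css
        # Compress those types whenever a full response body is sent
        AddOutputFilterByType DEFLATE application/javascript text/css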


  • Why do my files fail checksums when transferring to an eSATA drive but not when transferred off the external?

    - by R. Peterson
    I have been using TeraCopy to verify, and the large movie files show artifacting after transfer and fail their checksums; small files do not fail, only large files, it seems. When I transfer files off the external hard drive, no such checksum failures occur on any of the files. Could this be a bad cable, or maybe a bad external interface or eSATA interface on my computer? I have tried two eSATA interfaces, one on a PCI card and the other on the motherboard, both with similar results. So what might be the reason - could a bad hard drive or the external case be the problem?


  • Unable to locate Windows Server Error log files

    - by Sam007
    I am getting a 500 Internal Server Error in my application. Firebug gives me:

        NetworkError: 500 Internal Server Error - http://webgis.arizona.edu/ArcGIS/rest/services/webGIS/Shock_Models/GPServer/Income_Log/jobs/jc09c501156564f71abc5d98393581267/results/final_shp?dpi=96&transparent=true&format=png8&imageSR=102100&f=image&bbox=%7B%22xmin%22%3A-14519891.438356264%2C%22ymin%22%3A637618.0139790997%2C%22xmax%22%3A-6692739.741956295%2C%22ymax%22%3A6507981.786279075%2C%22spatialReference%22%3A%7B%22wkid%22%3A102100%7D%7D&bboxSR=102100&size=800%2C600

    And when I go to that particular link, I get this error:

        Server Error - Object reference not set to an instance of an object.

    Any idea how I can correct it? UPDATE: I was told to use Fiddler and look at the error details occurring over the network, and this is the output that I got:

        SESSION STATE: Done.
        Response Entity Size: 849 bytes.

        == FLAGS ==================
        BitFlags: [ClientPipeReused, ServerPipeReused] 0x18
        X-CLIENTPORT: 2010
        X-RESPONSEBODYTRANSFERLENGTH: 849
        X-EGRESSPORT: 2023
        X-HOSTIP: 128.196.53.161
        X-PROCESSINFO: firefox:2248
        X-CLIENTIP: 127.0.0.1
        X-SERVERSOCKET: REUSE ServerPipe#2

        == TIMING INFO ============
        ClientConnected: 15:53:51.383
        ClientBeginRequest: 15:53:51.494
        GotRequestHeaders: 15:53:51.494
        ClientDoneRequest: 15:53:51.494
        Determine Gateway: 0ms
        DNS Lookup: 0ms
        TCP/IP Connect: 0ms
        HTTPS Handshake: 0ms
        ServerConnected: 15:52:45.077
        FiddlerBeginRequest: 15:53:51.495
        ServerGotRequest: 15:53:51.495
        ServerBeginResponse: 15:53:51.679
        GotResponseHeaders: 15:53:51.679
        ServerDoneResponse: 15:53:51.679
        ClientBeginResponse: 15:53:51.679
        ClientDoneResponse: 15:53:51.679
        Overall Elapsed: 00:00:00.1850106

        The response was buffered before delivery to the client.

        == WININET CACHE INFO ============
        This URL is not present in the WinINET cache. [Code: 2]
        * Note: Data above shows WinINET's current cache state, not the state at the time of the request.
        * Note: Data above shows WinINET's Medium Integrity (non-Protected Mode) cache only.

    But I am still confused as to what the error is. This is the application. I am not sure if the error is due to the ArcGIS Server or to Windows Server 2008. I am new to working with Windows Server and wanted to know where I can look for the error log files. This is the link which gives the details and the log info of the job executed. This is the output.
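    As a general pointer (not specific to this application - the paths below are the usual Windows Server 2008 defaults, to be confirmed on the box, and ArcGIS Server keeps its own logs as well): Windows itself logs to the Event Log, and IIS writes per-site request logs under \inetpub\logs. For example:

        rem Last 20 entries from the Application event log, newest first
        wevtutil qe Application /c:20 /rd:true /f:text
        rem Default location of IIS request logs on Server 2008
        dir %SystemDrive%\inetpub\logs\LogFiles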


  • Alternative Windows Offline Files + Windows Backup + Previous Version Setup

    - by Herson
    Currently our documents are all hosted on a Windows 7 box. Users can access the files using Windows shares, and the documents are available offline (a Windows 7 feature). The documents are backed up daily by the Windows 7 Backup and Restore utility. Users can access previous versions of a file (from the backups) using Windows Explorer's "Previous Versions" feature. This setup is currently working well, except for the following:

        1. We would prefer to have access to hourly versions of each file, not daily.
        2. The previous-versions mechanism is tied to the backup mechanism. Windows 7 performs a full backup every week and an incremental backup every day. The previous versions of a file are actually whatever is available in the backups. If you have 20GB of documents and want to maintain at least three (3) years of history, you will use at minimum 3 years * 52 weeks * 20GB, or about 3TB, even if there are few changes in the documents. It's a pretty inefficient use of space.
        3. Looking up previous versions of a file is very slow (tens of minutes). This is probably related to the previous issue - Windows has to traverse all of its backups.

    I am considering using SVN with TortoiseSVN's auto-commit/auto-update. It would have the following advantages:

        - Backups are easy and also cover the whole history of each document (just back up the repository).
        - Previous versions can be created frequently. I think the svn commit/update cycle can be run every two minutes or so (a minimal sketch of that cycle is shown below).
        - Users can sync over the net.

    However, I can see the following issues:

        - More conflicts than in the original setup, because multiple users can now edit the same file even when both are online, i.e. can connect to the SVN repo. The users can of course lock the file first before editing, but that would mean they have to adjust.
        - Delay in the propagation of file changes. With Windows 7 file sharing, changes made by one online user are instantaneously available to other online users. With the SVN setup, changes will only be propagated when the users execute the svn add/commit/update sequence; the delay will probably be a few minutes. This workflow would no longer work: "Hi, I just edited document X, can you have a quick look?"

    I would like to ask the opinion of the community on alternative setups, or improvements on the above setup, to work out the kinks.
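    (Sketch only - conflict handling is deliberately ignored, and a scheduled task would run something like this in each user's working copy.)

        # pick up everyone else's changes first
        svn update --accept postpone
        # schedule any new files, then push local edits
        svn add --force . --quiet
        svn commit -m "scheduled auto-commit"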


  • Serving protected files using Nginx's X-Accel-Redirect header

    - by andybak
    I'm trying to serve protected files using this directive in my nginx.conf:

        location /secure/ {
            internal;
            alias /home/ldr/webapps/nginx/app/secure/;
        }

    I'm passing in paths in the form "/myfile.doc", and the file's path would be: /home/ldr/webapps/nginx/app/secure/myfile.doc. I just get 404s when I access "http: //myserver/secure/myfile.doc" (space inserted after http to stop ServerFault converting it to a link). I've tried taking the trailing / off the location directive and that makes no difference. Two questions:

        1. How do I fix it?
        2. How can I debug problems like this myself? How can I get nginx to report which path it's looking for? error.log shows nothing and access.log just tells me which URL is being requested - this is the bit I already know! It's no fun trying things randomly without any feedback.

    Here's my entire nginx.conf:

        daemon off;
        worker_processes 2;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;

            server {
                listen 21534;
                server_name my.server.com;
                client_max_body_size 5m;

                location /media/ {
                    alias /home/ldr/webapps/nginx/app/media/;
                }

                location / {
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    fastcgi_pass unix:/home/ldr/webapps/nginx/app/myproject/django.sock;
                    fastcgi_pass_header Authorization;
                    fastcgi_hide_header X-Accel-Redirect;
                    fastcgi_hide_header X-Sendfile;
                    fastcgi_intercept_errors off;
                    include fastcgi_params;
                }

                location /secure {
                    internal;
                    alias /home/ldr/webapps/nginx/app/secure/;
                }
            }
        }

    EDIT: I'm trying some of the suggestions here. So I've tried:

        location /secure/ {
            internal;
            alias /home/ldr/webapps/nginx/app/;
        }

    both with and without the trailing slash on location. I've also tried moving this block before the "location /" directive. The page I linked to has ^~ after 'location', giving:

        location ^~ /secure/ {
            ...
        }

    Not sure what that signifies, but it didn't work either!
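    For context, the backend half of this pattern (a hedged Django sketch - the view is invented for illustration; the important detail is that the header must carry the internal URI, /secure/myfile.doc, not the filesystem path):

        from django.http import HttpResponse

        def download(request, filename):
            # nginx intercepts this response and serves the file itself
            # from the internal /secure/ location
            response = HttpResponse()
            response['X-Accel-Redirect'] = '/secure/%s' % filename
            del response['Content-Type']  # let nginx pick the type by extension
            return response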


  • How to check there are no html files in current directory?

    - by kev
    I have a script which downloads HTML files into the current directory. Then it generates a report based on these HTML files. Finally, it deletes all these HTML files. So, when I run this script, I want to make sure there are no HTML files in the current directory first. This is what I've got:

        if ls *.html >/dev/null 2>&1; then
            echo 'clear HTML files first'
            exit
        fi

    Is there an easier way to check?
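    For comparison, an equivalent check without spawning ls is sketched here (bash-specific; dotfiles and subdirectories not considered):

        # compgen -G succeeds only if the glob matches at least one name
        if compgen -G '*.html' > /dev/null; then
            echo 'clear HTML files first'
            exit 1
        fi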


  • SCons does not clean all files

    - by meowsqueak
    I have a file system containing directories of "builds", each of which contains a file called "build-info.xml". However, some of the builds happened before the build script generated "build-info.xml", so in that case I have a somewhat non-trivial SCons SConstruct that is used to generate a skeleton build-info.xml so that it can be used as a dependency for further rules. I.e., for each directory:

        - if build-info.xml already exists, do nothing. More importantly, do not remove it on a 'scons --clean'.
        - if build-info.xml does not exist, generate a skeleton one instead - build-info.xml has no dependencies on any other files - the skeleton is essentially minimal defaults.
        - during a --clean, remove build-info.xml if it was generated, otherwise leave it be.

    My SConstruct looks something like this:

        def generate_actions_BuildInfoXML(source, target, env, for_signature):
            cmd = "python '%s/bin/create-build-info-xml.py' --version $VERSION --path . --output ${TARGET.file}" % (Dir('#').abspath,)
            return cmd

        bld = Builder(generator = generate_actions_BuildInfoXML, chdir = 1)
        env.Append(BUILDERS = { "BuildInfoXML" : bld })

        ...

        # VERSION = some arbitrary string, not important here
        # path = filesystem path, set elsewhere
        build_info_xml = "%s/build-info.xml" % (path,)
        if not os.path.exists(build_info_xml):
            env.BuildInfoXML(build_info_xml, None, VERSION = build)

    My problem is that 'scons --clean' does not remove the generated build-info.xml files. I played around with env.Clean(t, build_info_xml) within the 'if', but I was unable to get this to work - mainly because I could not work out what to assign to 't'. I want a generated build-info.xml to be cleaned unconditionally, rather than based on the cleaning of another target, and I wasn't able to get that to work. If I tried a simple env.Clean(None, "build_info_xml") after but outside the 'if', I found that SCons would clean every single build-info.xml file, including those that weren't generated. Not good either. What I'd like to know is how SCons goes about determining which files should be cleaned and which should not. Is there something funny about the way I've used a generator function that prevents SCons from recording this target as a Clean candidate?
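    One untested angle, sketched under the assumption that the --clean run skips the 'if' body (the file exists by then, so the target is never declared and SCons has nothing to clean): declare the target unconditionally and protect the pre-existing files instead. The caveat is that SCons may then want to re-generate pre-existing files if command signatures change.

        import os

        build_info_xml = "%s/build-info.xml" % (path,)
        pre_existing = os.path.exists(build_info_xml)

        # Always declare the target so SCons knows about it on every run,
        # including the --clean run
        t = env.BuildInfoXML(build_info_xml, None, VERSION = build)

        if pre_existing:
            env.NoClean(t)   # never clean files we did not generate
            env.Precious(t)  # and do not delete them before a rebuild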


  • WiX 3: Using heat.exe to add bulk files to a new WiX project: HEAT5150

    - by Karen Kwong
    If this is a repeat question, please direct me to the existing solution; I wasn't able to find a matching query. We currently use InstallShield. I'm attempting to convert a project with 407 files to a WiX 3 installation package. I tried using heat.exe to do some of the automation, but I get the following warning for almost every file:

        c:\> heat dir "c:\projectDir\projectA" -gg -ke -template:Product -out "c:\install\projectA\heatOutput"

        heat.exe: warning HEAT5150 : Could not harvest data from a file that was expected to be a SelfReg DLL: c:\projectDir\projectA\plugin1.dll. If this file does not support SelfReg you can ignore this warning. Otherwise, this error detail may be helpful to diagnose the failure: Unable to load file: c:\projectDir\projectA\plugin1.dll, error: 126.

    Q: Is it normal for this warning to be reported for every file? If there's a current "How To create/convert to your first WiX install project with many files" tutorial, please point me to it. The key requirement is "with many files". Thank you. -Karen Kwong- PS. I know that WiX is designed for incremental install-project creation, but it would be nice to know if there's an automated way to convert existing install projects.
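    If the harvested DLLs are not actually self-registering, one hedged option is to suppress registry harvesting entirely so heat never tries to load them - the -sreg switch exists for this, though whether it silences HEAT5150 in this exact scenario is an assumption to verify:

        heat dir "c:\projectDir\projectA" -gg -ke -sreg -template:Product -out "c:\install\projectA\heatOutput"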


  • Using .NET's HttpWebRequest to download a multitude of files in a row

    - by Cornelius
    I have an application that needs to download several files in a row in succession (sometimes a few thousand). However, what ends up happening when several files need to be downloaded is that I get an exception with an inner exception of type SocketException and the error code 10048 (WSAEADDRINUSE). I did some digging, and basically it's because the machine has run out of sockets (and they are all waiting for 240s or so before they become available again) - not coincidentally, it starts happening around the 1024-file range. I would expect that the HttpWebRequest/ServicePointManager would be reusing my connection, but apparently it is not (and the files are https, so that may be part of it). I never saw this problem in the C++ code that this was ported from (but that doesn't mean it never happened - I'd be surprised if it did, though). I am properly closing the WebRequest object, and the HttpWebRequest object has KeepAlive set to true by default. Next my intent is to fiddle around with ServicePointManager.SetTcpKeepAlive(). However, I can't see how more people haven't run into this problem. Has anyone else run into it, and if so, what did you do to get around it? Currently I have a retry scheme that detects this error and waits it out, but that doesn't seem like the right thing to do. Here's some basic code to verify what I'm doing (just in case I'm missing closing something):

        WebRequest webRequest = WebRequest.Create(uri);
        webRequest.Method = "GET";
        webRequest.Credentials = new NetworkCredential(username, password);
        WebResponse webResponse = webRequest.GetResponse();
        try
        {
            using (Stream stream = webResponse.GetResponseStream())
            {
                // read the stream
            }
        }
        finally
        {
            webResponse.Close();
        }
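    For reference, the knobs the post is circling around, in a hedged C# sketch - the values are illustrative, not recommendations:

        using System;
        using System.Net;

        static class ConnectionTuning
        {
            internal static void Apply(Uri uri)
            {
                // Raise the per-host connection limit (the classic default is 2)
                ServicePointManager.DefaultConnectionLimit = 10;
                // TCP keep-alive: enabled, first probe after 30s idle, then every 1s
                ServicePointManager.SetTcpKeepAlive(true, 30000, 1000);
                // Recycle this host's pooled connections periodically
                ServicePoint sp = ServicePointManager.FindServicePoint(uri);
                sp.ConnectionLeaseTimeout = 60 * 1000; // milliseconds
            }
        }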


  • Writing files in App_Data causes tempdata to be null

    - by RAMX
    I have a small ASP.NET MVC 1 web app that can store files and create directories in the App_Data directory. When the write operation succeeds, I add a message to the TempData and do a RedirectToRoute. The problem is that the TempData is null when the action is executed. If I write the files to a directory outside of the web application's root directory, the TempData is not null and everything works correctly. Any ideas why writing into App_Data seems to clear the TempData? Edit: if DRS.Logic.Repository.Manager.CreateFile(path, hpf, comment) writes into App_Data, TempData will be null in the action being redirected to. If it is a directory outside of the web app root, it is fine. No exceptions are being thrown.

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Create(int id, string path, FormCollection form)
        {
            ViewData["path"] = path;
            ViewData["id"] = id;
            HttpPostedFileBase hpf;
            string comment = form["FileComment"];
            hpf = Request.Files["File"] as HttpPostedFileBase;
            if (hpf.ContentLength != 0)
            {
                DRS.Logic.Repository.Manager.CreateFile(path, hpf, comment);
                TempData["notification"] = "file was created";
                return RedirectToRoute(new
                {
                    controller = "File",
                    action = "ViewDetails",
                    id = id,
                    path = path + Path.GetFileName(hpf.FileName)
                });
            }
            else
            {
                TempData["notification"] = "No file were selected.";
                return View();
            }
        }


  • Fluent config not generating mapping files

    - by rboarman
    Hello, I am trying to get Fluent NHibernate to generate mappings so I can take a look at the files and the SQL. My code is based on this post and on what I can glean from the documentation: http://stackoverflow.com/questions/1375146/fluent-mapping-entities-and-classmaps-in-different-assemblies I am using the latest code from git. Here's my config code:

        Configuration cfg = new Configuration();
        var ft = Fluently.Configure(cfg);

        // DbConnection by fluent
        ft.Database(
            MsSqlConfiguration
                .MsSql2008
                .ConnectionString("……")
                .ShowSql()
                .UseReflectionOptimizer()
        );

        // get mapping files
        ft.Mappings(m =>
        {
            // set up the mapping locations
            m.FluentMappings.AddFromAssemblyOf<Entity>()
                .ExportTo(@"C:\temp");
            m.Apply(cfg);
        });

    I also tried:

        var sessionFactory = Fluently.Configure()
            .Database(MsSqlConfiguration
                .MsSql2008
                .ShowSql()
                .ConnectionString("……"))
            .Mappings(p => p.FluentMappings
                .AddFromAssemblyOf<Entity>()
                .ExportTo(@"c:\temp\"))
            .BuildSessionFactory();

    I have verified that the connection string is correct. The issue is that no mapping files show up in the ExportTo folder, and no SQL shows up in the output window or in the log file. No errors or exceptions are generated either. I have no idea where to go from here. Thank you in advance. Rick


  • evaluation of a java thread dump

    - by raticulin
    I got a thread dump of one of my processes. It has a bunch of these threads. I guess they are holding a bunch of memory, so I am getting OOM.

        "Thread-8264" prio=6 tid=0x4c94ac00 nid=0xf3c runnable [0x4fe7f000]
           java.lang.Thread.State: RUNNABLE
            at java.util.zip.Inflater.inflateBytes(Native Method)
            at java.util.zip.Inflater.inflate(Inflater.java:223)
            - locked <0x0c9bc640> (a java.util.zip.Inflater)
            at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.read(ZipArchiveInputStream.java:235)
            at com.my.ZipExtractorCommonsCompress.extract(ZipExtractorCommonsCompress.java:48)
            at com.my.CustomThreadedExtractorWrapper$ExtractionThread.run(CustomThreadedExtractorWrapper.java:151)

           Locked ownable synchronizers:
            - None

        "Thread-8241" prio=6 tid=0x4c94a400 nid=0xb8c runnable [0x4faef000]
           java.lang.Thread.State: RUNNABLE
            at java.util.zip.Inflater.inflateBytes(Native Method)
            at java.util.zip.Inflater.inflate(Inflater.java:223)
            - locked <0x0c36b808> (a java.util.zip.Inflater)
            at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.read(ZipArchiveInputStream.java:235)
            at com.my.ZipExtractorCommonsCompress.extract(ZipExtractorCommonsCompress.java:48)
            at com.my.CustomThreadedExtractorWrapper$ExtractionThread.run(CustomThreadedExtractorWrapper.java:151)

           Locked ownable synchronizers:
            - None

    I am trying to find out how it arrived at this situation. CustomThreadedExtractorWrapper is a wrapper class that fires a thread to do some work (ExtractionThread, which uses ZipExtractorCommonsCompress to extract zip contents from a compressed stream). If the task is taking too long, ExtractionThread.interrupt() is called to cancel the operation. I can see in my logs that the cancellation happened 25 times, and I see 21 of these threads in my dump. My questions:

        1. What is the status of these threads? Alive and running? Blocked somehow? They did not die with .interrupt(), apparently?
        2. Is there a sure way to really kill a thread?
        3. What does 'locked' really mean in the stack trace? Line 223 in Inflater.java is:

        public synchronized int inflate(byte[] b, int off, int len) {
            ...
            // return is line 223
            return inflateBytes(b, off, len);
        }
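    On the "sure way to kill it" question, a hedged sketch of one common approach (assuming the cancelling thread has a handle on the stream the worker reads from): Thread.interrupt() does not unblock code that is busy in a native method such as inflateBytes(), but closing the underlying stream usually makes the worker's next read() fail, letting it exit. The class and names below are invented for illustration.

        import java.io.IOException;
        import java.io.InputStream;

        class CancellableExtraction implements Runnable {
            private final InputStream in; // e.g. the stream feeding ZipArchiveInputStream

            CancellableExtraction(InputStream in) { this.in = in; }

            public void run() {
                try {
                    byte[] buf = new byte[8192];
                    while (in.read(buf) != -1) {
                        // ... extract/inflate the bytes read so far ...
                    }
                } catch (IOException expectedOnCancel) {
                    // the stream was closed underneath us: treat it as cancellation
                }
            }

            // called by the watchdog instead of (or as well as) interrupt()
            void cancel() throws IOException {
                in.close(); // forces the next read() to fail, so the loop exits
            }
        }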


  • SSIS package not picking up configuration files correctly

    - by blntechie
    We recently migrated some 30 DTS packages in SQL Server 2000 to SSIS packages in SQL Server 2008. We created the packages in such a way that all environment-related variables and other required information are picked up from configuration files, for ease of maintaining the packages across different environments like Dev, QA and Prod. After setting up all the packages with the config files, we tested them from Business Intelligence Development Studio and they worked fine, picking up the values from the configuration file. When we changed the values in the config files to Dev or another environment, they correctly picked up the new values and executed. We tried this for two different environments and the packages worked fine, so we deployed to Prod, where they also worked fine. Yesterday, I had to make a functional change to one package, so I made the change (it is just a change to a parameter in a SQL procedure execution task, not related to any variables) and tested it in BIDS against two environments; it worked fine. As the change was not related to any environment change, we deployed only the updated package (not the associated config file) manually to Prod (i.e. without the use of a manifest). The config file that the package previously used, and that was working fine in Prod, remained unaltered. But when the package was executed, it was pointing to QA; I believe the package didn't read from the config file. One possible reason: the package may still be using the last-executed values, which usually remain in the .dtsx file (this can be checked by opening the file in a text editor). Normally, when a package is executed, those values are overwritten from the config file; I guess that is not happening here. What are the possible reasons for this? We have tested switching between test environments extensively and it does not show this behavior, yet we have now encountered it in Prod twice. Has anyone else experienced this, and how did you resolve it?


  • Cron Job on Ubuntu Hardy Executing But Not Deleting Files As Expected

    - by Patrick McKenzie
    I have a bit of a pickle here and wonder if anyone can give me some pointers. I have a cron job which executes daily for a particular user and is supposed to sweep files in a particular directory. Technically, it is two jobs. I've turned on cron.log to verify they're actually executing, and they are:

        May 24 11:03:01 AppNameGoesHere /USR/SBIN/CRON[11257]: (mongrel_AppNameGoesHere) CMD (rm -rf /var/www/apps/AppNameGoesHere/current/public/ {popular,index,purchasing,purchasing-alternate,support,about-us,guarantee,screenshots}.htm{,l})
        May 24 11:04:01 AppNameGoesHere /USR/SBIN/CRON[11260]: (mongrel_AppNameGoesHere) CMD (rm -rf /var/www/apps/AppNameGoesHere/current/public/ {stats,popular,bcf,articles,expenses})

    (I have removed the actual usernames and formatted it so that it is less ugly on Stack Overflow.) Now, my question: despite the fact that I can see these deletions executing and apparently succeeding in the log, if I go to the specified directory, the files are still there. I initially suspected permission hijinks were going on, but I've verified that I can delete the files manually by su-ing into the mongrel_AppNameGoesHere user and issuing individual rm commands, or by copy/pasting the cron job to the command line. Anything that I don't manually zap stays unzapped despite days of that cron job executing "successfully". Any suggestions as to what might be happening? I was previously using Dapper Drake with these cron jobs in the /etc/crontab file directly, and when I upgraded to Hardy I moved them to user-specific crontabs (via sudo crontab -e -u mongrel_AppNameGoesHere), which is the point where they appear to have stopped working.


  • How to exclude R*.class files from a proguard build

    - by Jeremy Bell
    I am one step away from making the method described here work with a single project (vs. one project for Scala and one for Android): http://stackoverflow.com/questions/2761443/targeting-android-with-scala-2-8-trunk-builds I've come across a problem. I pass this input file (arguments) to proguard:

        -injars bin;lib/scala-library.jar(!META-INF/MANIFEST.MF,!library.properties)
        -outjar lib/scandroid.jar
        -libraryjars lib/android.jar
        -dontwarn
        -dontoptimize
        -dontobfuscate
        -dontskipnonpubliclibraryclasses
        -dontskipnonpubliclibraryclassmembers
        -keepattributes Exceptions,InnerClasses,Signature,Deprecated,
                        SourceFile,LineNumberTable,*Annotation*,EnclosingMethod
        -keep public class org.scala.jeb.** { public protected *; }
        -keep public class org.xml.sax.EntityResolver { public protected *; }

    Proguard successfully builds scandroid.jar; however, it appears to have included the generated R classes that the Android resource builder generates and compiles. In this case, they are located in bin/org/jeb/R*.class. This is not what I want. The Android Dalvik converter cannot build because it thinks there is a duplicate of the R class (it's in scandroid and also in the R*.class files). How can I modify the above proguard arguments to exclude the R*.class files from scandroid.jar so the Dalvik converter is happy? Edit: I should note that I tried adding ;bin/org/jeb/R.class;etc... to the -libraryjars argument, and that only seemed to cause it to complain about duplicate classes. In addition, proguard decided to exclude my Scala class files too.
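    One hedged possibility, using ProGuard's filter syntax on -injars (untested here - the package path comes from the question, and R's inner classes such as R$layout are matched with R$*):

        -injars bin(!org/jeb/R.class,!org/jeb/R$*.class);lib/scala-library.jar(!META-INF/MANIFEST.MF,!library.properties)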


  • error in implementing static files in django

    - by POOJA GUPTA
    My settings.py file:

        STATIC_ROOT = '/home/pooja/Desktop/static/'

        # URL prefix for static files.
        STATIC_URL = '/static/'

        # Additional locations of static files
        STATICFILES_DIRS = (
            '/home/pooja/Desktop/mysite/search/static',
        )

    My urls.py file:

        from django.conf.urls import patterns, include, url
        from django.contrib.staticfiles.urls import staticfiles_urlpatterns

        from django.contrib import admin
        admin.autodiscover()

        urlpatterns = patterns('',
            url(r'^search/$', 'search.views.front_page'),
            url(r'^admin/', include(admin.site.urls)),
        )

        urlpatterns += staticfiles_urlpatterns()

    I have created an app using Django which searches for keywords in 10 XML documents and then returns their frequency count, displayed as a graphical representation plus a list of filenames and their respective counts. Now the list has the filenames hyperlinked; I want the files served by the Django server when the user clicks them, and for that I have used Django's static files provision. The hyperlinking has been done in this manner:

        <ul>
        {% for l in list1 %}
            <li><a href="{{STATIC_URL}}static/{{l.file_name}}">{{l.file_name}}</a> {{l.frequency_count}}</li>
        {% endfor %}
        </ul>

    Now when I run my app on the server, everything is running fine, but as soon as I click on a filename, it gives me this error:

        Using the URLconf defined in mysite.urls, Django tried these URL patterns, in this order:
            ^search/$
            ^admin/
            ^static\/(?P<path>.*)$
        The current URL, search/static/books.xml, didn't match any of these.

    I don't know why this error is coming, because I have followed the steps required to achieve this. I have posted my urls.py file and it is showing the error in that only. I'm new to Django, so please help.


  • Source Control Checkin Comments at Top Of Source Files

    - by James Wiseman
    I've noticed a discrepancy with some source files in our system, whereby some contain source-control check-in comments and some do not. These comments are added automatically to the top of the file when it is checked in:

        * $Log: //vm1/Projects/Morpheus/Sleep.bdy-arc $
        --
        -- Rev 1.14   Apr 14 2009 15:32:52   John Smith
        -- Fixed bugs 2292 and 2230.

    This seems to have been quite prevalent in all the companies with which I have worked, but I must confess that I struggle to see the point. Generally the comments aren't that good, are often left by people who have long since departed, and even when they are of a high standard it is difficult to tie them to physical code changes. It also strikes me that you are physically changing the file that you are checking in. Now, this may not be such a problem with files that will be compiled, but it could be a disaster with others, e.g. JavaScript files. So really, my query is: what was the motivation, in concept, behind providing this functionality in the first instance? Does anyone actually find these comments useful? Also, I would be curious to know if this is a feature that is commonly supported within source control systems. I am aware of it in PVCS, VSS and Subversion (Subversion Keyword Substitution); however, I wonder if it is also available in some of the more popular DVCSs. Your help, as always, is much appreciated.
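    For concreteness, Subversion's variant is opt-in per file via the svn:keywords property (a sketch; note that Subversion supports keywords such as Id, Date, Author and Revision, but has no $Log$-style expanding history block):

        # enable keyword expansion on a file
        svn propset svn:keywords "Id Date Author Revision" Sleep.bdy
        # then a line like this in the file gets expanded on commit:
        #   $Id$  ->  $Id: Sleep.bdy 1234 2009-04-14 15:32:52Z jsmith $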


  • Importing owl files

    - by Mikae Combarado
    Hello, I have a problem with importing OWL files using the OWL API in Java. I can successfully import 2 OWL files. However, a problem occurs when I try to import 3 or more OWL files that are integrated with each other. E.g.:

        Base.owl        -- base ontology
        Electronics.owl -- electronics ontology, which imports Base.owl
        Telephone.owl   -- telephone ontology, which imports Base.owl and Electronics.owl

    When I just map Base.owl and load Electronic.owl, it works smoothly. The code is given below:

        File fileBase = new File("filepath/Base.owl");
        File fileElectronic = new File("filePath/Electronic.owl");
        SimpleIRIMapper iriMapper = new SimpleIRIMapper(IRI.create("url/Base.owl"), IRI.create(fileBase));
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        manager.addIRIMapper(iriMapper);
        OWLOntology ont = manager.loadOntologyFromOntologyDocument(fileElectronic);

    However, when I want to load Telephone.owl, I just create an additional iriMapper and add it to the manager. The additional code is marked with // ** below:

        File fileBase = new File("filepath/Base.owl");
        File fileElectronic = new File("filePath/Electronic.owl");
        File fileTelephone = new File("filePath/Telephone.owl");                                                        // **
        SimpleIRIMapper iriMapper = new SimpleIRIMapper(IRI.create("url/Base.owl"), IRI.create(fileBase));
        SimpleIRIMapper iriMapper2 = new SimpleIRIMapper(IRI.create("url/Electronic.owl"), IRI.create(fileElectronic)); // **
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        manager.addIRIMapper(iriMapper);
        manager.addIRIMapper(iriMapper2);                                                                               // **
        OWLOntology ont = manager.loadOntologyFromOntologyDocument(fileTelephone);                                      // **

    The code shown above gives this error:

        Could not load import: Import(url/Electronic.owl>)
        Reason: Could not loaded imported ontology: <url/Base.owl>
        Cause: null

    It would be really appreciated if someone could give me a hand. Thanks in advance.
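    One hedged alternative to wiring up a SimpleIRIMapper per file (assuming all three .owl files live in one folder): let AutoIRIMapper scan the directory and register a mapping for every ontology it finds there.

        import java.io.File;
        import org.semanticweb.owlapi.apibinding.OWLManager;
        import org.semanticweb.owlapi.model.*;
        import org.semanticweb.owlapi.util.AutoIRIMapper;

        public class LoadTelephone {
            public static void main(String[] args) throws OWLOntologyCreationException {
                OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
                // scan "filePath" (recursively) and map every ontology IRI found to its file
                manager.addIRIMapper(new AutoIRIMapper(new File("filePath"), true));
                OWLOntology ont = manager.loadOntologyFromOntologyDocument(new File("filePath/Telephone.owl"));
            }
        }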

