Search Results

Search found 17847 results on 714 pages for 'virtual disk'.


  • MVC2: Validate PartialView before Form Submit of Page containing Partial View

    - by Pascal
    I am using ASP.NET MVC 2 and have a basic page that includes a partial view within a form:

        <% using (Html.BeginForm()) { %>
            <% Html.RenderAction("partialViewActionName", "Controllername"); %>
            <input type="submit" value="Weiter" />
        <% } %>

    When I submit the form, the HttpPost action of my page is called, and AFTER that the HttpPost action of my partial view is called:

        [HttpPost]
        public virtual ActionResult PagePostMethod(myModel model)
        {
            // here I should know about the validation of my partial view:
            // if partialView.ModelState is valid then
            //     return View("success");
            // else return View(model)
        }

        [HttpPost]
        public virtual ActionResult partialViewActionName(myModel model)
        {
            ModelState.AddModelError("", "Error");
            return View(model);
        }

    But because I am doing the validation in the HttpPost method of my partial view (I want to use the partial view in several places), I can't decide whether my whole page is valid or not. Does anyone have an idea how I could do this? Isn't it a common task to have several partial views in a page but need the validation information in the page's action methods? Thanks very much for your help!
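    One approach (a sketch, not from the original post: it assumes the page's view model carries the partial's model, and uses MVC 2's Controller.TryValidateModel) is to re-run validation for the partial's model inside the page's POST action, so a single ModelState covers the whole page:

        [HttpPost]
        public virtual ActionResult PagePostMethod(myModel model)
        {
            // Re-validate the partial's model against its validation rules;
            // any errors land in this controller's ModelState.
            if (TryValidateModel(model) && ModelState.IsValid)
                return View("success");

            return View(model);
        }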


  • How to analyze 'dbcc memorystatus' result in SQL Server 2008

    - by envykok
    Currently I am facing a SQL memory pressure issue. I have run 'dbcc memorystatus'; here is part of my result:

        Memory Manager                      KB
        VM Reserved                         23617160
        VM Committed                        14818444
        Locked Pages Allocated              0
        Reserved Memory                     1024
        Reserved Memory In Use              0

        Memory node Id = 0                  KB
        VM Reserved                         23613512
        VM Committed                        14814908
        Locked Pages Allocated              0
        MultiPage Allocator                 387400
        SinglePage Allocator                3265000

        MEMORYCLERK_SQLBUFFERPOOL (node 0)  KB
        VM Reserved                         16809984
        VM Committed                        14184208
        Locked Pages Allocated              0
        SM Reserved                         0
        SM Committed                        0
        SinglePage Allocator                0
        MultiPage Allocator                 408

        MEMORYCLERK_SQLCLR (node 0)         KB
        VM Reserved                         6311612
        VM Committed                        141616
        Locked Pages Allocated              0
        SM Reserved                         0
        SM Committed                        0
        SinglePage Allocator                1456
        MultiPage Allocator                 20144

        CACHESTORE_SQLCP (node 0)           KB
        VM Reserved                         0
        VM Committed                        0
        Locked Pages Allocated              0
        SM Reserved                         0
        SM Committed                        0
        SinglePage Allocator                3101784
        MultiPage Allocator                 300328

        Buffer Pool                         Value
        Committed                           1742946
        Target                              1742946
        Database                            1333883
        Dirty                               940
        In IO                               1
        Latched                             18
        Free                                89
        Stolen                              408974
        Reserved                            2080
        Visible                             1742946
        Stolen Potential                    1579938
        Limiting Factor                     13
        Last OOM Factor                     0
        Page Life Expectancy                5463

        Process/System Counts               Value
        Available Physical Memory           258572288
        Available Virtual Memory            8771398631424
        Available Paging File               16030617600
        Working Set                         15225597952
        Percent of Committed Memory in WS   100
        Page Faults                         305556823
        System physical memory high         1
        System physical memory low          0
        Process physical memory low         0
        Process virtual memory low          0

        Procedure Cache                     Value
        TotalProcs                          11382
        TotalPages                          430160
        InUsePages                          28

    Can you help me analyze this result? Is it that a lot of execution plans have been cached, causing the memory issue, or is there another reason?
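    For what it's worth, the CACHESTORE_SQLCP single-page allocator (~3 GB) does point at the SQL plan cache. A diagnostic sketch using standard SQL Server 2008 DMVs (not from the original post) to see what dominates the cache:

        -- Break the plan cache down by object type and size.
        SELECT objtype,
               COUNT(*)                                     AS plan_count,
               SUM(CAST(size_in_bytes AS bigint)) / 1048576 AS size_mb
        FROM sys.dm_exec_cached_plans
        GROUP BY objtype
        ORDER BY size_mb DESC;

    A large "Adhoc" share usually points at unparameterized queries; the 'optimize for ad hoc workloads' server option is the usual mitigation there.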


  • SSRS 2008 Report Manager Error

    - by Nick
    I have just installed SQL Server 2008, including Reporting Services, on Windows Server 2003. I'm having a problem, though, accessing Report Manager. When the Reporting Services service is first started I can access it fine, but after maybe an hour, when I try to access it, I get an error saying:

        Unable to connect to the remote server.

    The Reporting Services service is still running at this point. I can connect to it through Reporting Services Configuration Manager, and clicking on the Web Service URL gives a directory listing (I assume that is correct behaviour). If I stop and start the service through Reporting Services Configuration Manager then I can access Report Manager once again (although in maybe an hour I will get the same error once again).

    I've installed the latest SP1 service pack. I'm using the same domain account to run all the SQL services. The report server is set to use the default ReportServer virtual directory, IP address All Assigned, TCP port 80 and no SSL certificate. Report Manager is set to use the default Reports virtual directory, IP address All Assigned, TCP port 80 and no SSL certificates. In the log file I get an error:

        Unable to connect to remote server
        HTTP status code 500
        An attempt was made to access a socket in a way forbidden by its access permissions.

    Does anyone have any idea why this is happening? I've searched the net but haven't been able to find a solution.
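    The socket-permissions message is the winsock error WSAEACCES, so one thing worth capturing at the moment the failure starts (a diagnostic sketch, not from the original post) is what is holding port 80:

        rem See which process owns port 80 when Report Manager stops responding:
        netstat -ano | findstr :80
        rem Map the owning PID to a process name (<pid> is a placeholder):
        tasklist /fi "PID eq <pid>"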


  • Android SQLite database gets corrupted

    - by Seu
    There are about 100 people using my Android app right now, and every once in a while I get a crash report to the server with this stack trace:

        android.database.sqlite.SQLiteDatabaseCorruptException: database disk image is malformed
            at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2596)
            at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2621)
            at android.app.ActivityThread.access$2200(ActivityThread.java:126)
            at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1932)
            at android.os.Handler.dispatchMessage(Handler.java:99)
            at android.os.Looper.loop(Looper.java:123)
            at android.app.ActivityThread.main(ActivityThread.java:4595)
            at java.lang.reflect.Method.invokeNative(Native Method)
            at java.lang.reflect.Method.invoke(Method.java:521)
            at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
            at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
            at dalvik.system.NativeStart.main(Native Method)
        Caused by: android.database.sqlite.SQLiteDatabaseCorruptException: database disk image is malformed
            at android.database.sqlite.SQLiteQuery.native_fill_window(Native Method)
            at android.database.sqlite.SQLiteQuery.fillWindow(SQLiteQuery.java:75)
            at android.database.sqlite.SQLiteCursor.fillWindow(SQLiteCursor.java:295)
            at android.database.sqlite.SQLiteCursor.getCount(SQLiteCursor.java:276)
            at android.database.AbstractCursor.moveToPosition(AbstractCursor.java:171)
            at android.database.AbstractCursor.moveToFirst(AbstractCursor.java:248)

    The result is the app crashing and all the data in the DB being lost. One thing to note is that every time I read or write to the database I get a new SQLiteDatabase and close it as soon as I'm done. I thought this would simplify things, but perhaps that's causing the problem? Is it possible this is just a SQLite bug?
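    Opening and closing a fresh SQLiteDatabase around every operation is a plausible suspect: overlapping open/close cycles on the same file from different threads are a known way to corrupt it. A common pattern (a sketch; MyOpenHelper stands in for your own SQLiteOpenHelper subclass) is to share one helper for the life of the process:

        import android.content.Context;
        import android.database.sqlite.SQLiteOpenHelper;

        public final class Db {
            private static SQLiteOpenHelper sHelper;

            private Db() {}

            // One helper (and thus one serialized connection) per process.
            public static synchronized SQLiteOpenHelper get(Context context) {
                if (sHelper == null) {
                    sHelper = new MyOpenHelper(context.getApplicationContext());
                }
                return sHelper;
            }
        }

    Callers then use Db.get(context).getWritableDatabase() and never close the database themselves.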


  • "Admin module" taking over [Yii framework]

    - by Flavius
    Hi. I have an "admin" module and I want it to serve "dynamic controllers", i.e. to provide a default behaviour for controllers which don't really exist ("virtual controllers"). I've invented a lightweight messaging mechanism for loose communication between modules. I'd like to use it so that when e.g. ?r=admin/users/index is requested, it calls the "virtual controller" UserController of AdminModule, which would, by default, use this messaging mechanism to notify the real module UsersModule that it can answer the request.

    I thought about simulating this behaviour in AdminModule::init(), but at that point I have no way of deciding whether the action can be processed by a real controller or not, or at least I don't know how to do it. This is because of the way Yii works: bottom-up; the controller is the one that renders the view AND the application layout (or the module's, if it exists), for example. I don't think the module even has a say in whether a given controller+action is handled or not.

    To recap, I'm looking for something like CWebModule::missingController($controllerId, $actionId), analogous to CController::missingAction($actionId), or a workaround to simulate it. That would possibly live in CWebModule::init(), or somewhere I can find out whether the controller actually exists: if it does, it's its job to handle the action; if not, my virtual-controller logic should take over. (I didn't type it wrong: in r=admin/users/index, "users" is the actual module, as specified in the application's config.)
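    One possible hook (a rough sketch, assuming Yii 1.1, where modules inherit CModule::createController(); VirtualController is a hypothetical class of yours) is to override controller resolution in the module and fall back when nothing real matches:

        class AdminModule extends CWebModule
        {
            public function createController($route, $owner = null)
            {
                // Let Yii resolve a real controller first.
                $result = parent::createController($route, $owner);
                if ($result !== null) {
                    return $result;
                }
                // No real controller: hand the remaining route to a catch-all
                // controller that forwards via your messaging mechanism.
                return array(new VirtualController('virtual', $this), $route);
            }
        }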


  • Fluent nhibernate: Enum in composite key gets mapped to int when I need string

    - by Quintin Par
    By default the behaviour of FNH is to map enums to their string value in the db. But when mapping an enum as part of a composite key, the property gets mapped as an int. E.g. in this case:

        public class Address : Entity
        {
            public Address() { }
            public virtual AddressType Type { get; set; }
            public virtual User User { get; set; }
        }

    where AddressType is:

        public enum AddressType
        {
            PRESENT, COMPANY, PERMANENT
        }

    The FNH mapping is:

        mapping.CompositeId()
            .KeyReference(x => x.User, "user_id")
            .KeyProperty(x => x.Type);

    The schema creation for this mapping results in:

        create table address (
            Type INTEGER not null,
            user_id VARCHAR(25) not null,
            ...

    and the hbm as:

        <composite-id mapped="true" unsaved-value="undefined">
          <key-property name="Type" type="Company.Core.AddressType, Company.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">
            <column name="Type" />
          </key-property>
          <key-many-to-one name="User" class="Company.Core.CompanyUser, Company.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">
            <column name="user_id" />
          </key-many-to-one>
        </composite-id>

    whereas AddressType should have been generated as type="FluentNHibernate.Mapping.GenericEnumMapper`1[[Company.Core.AddressType, ...

    How do I instruct FNH to map it with the default string enum generic mapper?
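    One thing to try (a sketch; it assumes your Fluent NHibernate version exposes the KeyProperty overload that takes a customization callback) is to set the key property's type to the generic enum mapper explicitly:

        mapping.CompositeId()
            .KeyReference(x => x.User, "user_id")
            // Force the string-based enum mapper instead of the int default.
            .KeyProperty(x => x.Type, k => k.Type(typeof(GenericEnumMapper<AddressType>)));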


  • Recover files from corrupt filesystem

    - by Emile 81
    My situation: I have an older 80GB IDE internal hdd with a few files on it that I would very much like to recover:

    - some Word documents
    - some LaTeX documents (text files) and pictures (png, jpg, eps files)
    - some other text documents and Visual Studio project files

    I had backed them up (not the LaTeX ones, though) using svn, but have not committed lately, and would lose a lot of work if I can't recover. The hdd seems to have lost its filesystem; I have no idea how it came about. I know it has/had 3 NTFS partitions; I know the files I want are on the second or third partition.

    I read http://superuser.com/questions/81877/recover-hard-disk-data. Partition Find and Mount did not see all the partitions using intelligent scan. TestDisk does (I think); I followed the step-by-step instructions here, but when I try to list the files it says: "Can't open filesystem, filesystem seems damaged." I'm not sure how to proceed, as TestDisk's wiki does not contain this error message afaik. I don't know if the hdd is going to fail, or if some program has caused the filesystem to become corrupt; the hdd doesn't make a sound, so I guess that's good. I would like some guidance so I don't accidentally cause more damage. (E.g. is it OK to let TestDisk write the filesystem to disk? I'm pretty sure the partitions are listed OK, but not 100%.)
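    Before letting any tool write to the drive, a common precaution (a sketch, not from the original post; /dev/sdb is a placeholder for the 80GB disk) is to take a sector-level image and point the recovery tools at the image instead:

        # GNU ddrescue: copy what is readable, keep a log so the run can be
        # resumed; -n skips the slow scraping pass on the first attempt.
        ddrescue -n /dev/sdb disk.img disk.log
        # TestDisk/PhotoRec can then be run against disk.img rather than the disk.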


  • Deployed Qt5 Application Doesn't Print or Show Print Dialog

    - by MustacheMcLimey
    I'm experiencing Qt4-to-Qt5 troubles. In my application, when the user clicks the print button two things should happen: one is that a PDF gets written to disk (which still works fine in the new version, so I know that some of the printing functions are working properly), and the other is that a QPrintDialog should exec() and then send to a connected printer. I see the dialog when I launch from my development machine. The application launches on the deployed machine, but the QPrintDialog never shows and the document never prints.

    I am including print support:

        QT += core gui network webkitwidgets widgets printsupport

    I have been using Process Explorer to see what DLLs the application uses on my development machine, and I believe that everything is present. My application bundle includes:

        {myAppPath}\MyApp\[MyApp.exe, Qt5PrintSupport.dll, ...]
        {myAppPath}\plugins\printsupport\windowsprintersupport.dll
        {myAppPath}\plugins\imageformats\[qgif.dll, qico.dll, qjpeg.dll, qmng.dll, qtga.dll, qtiff.dll, qwbmp.dll]

    The following is the relevant code snippet:

        void PrintableForm::printFile()
        {
            // Writes the PDF to disk in every environment
            pdfCopy();

            // Paper copy only works on my dev machine
            QPrinter paperPrinter;
            QPrintDialog printDialog(&paperPrinter, this);
            if (printDialog.exec() == QDialog::Accepted) {
                view->print(&paperPrinter);
            }
            this->accept();
        }

    My first thought is that the relevant DLLs are not being found come print time, which would mean my application file system is incorrect, but I have not found anything that shows me a different file structure. Am I on the right track, or is there something else wrong with this setup?
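    Since the plugins live in a plugins\ subdirectory rather than directly beside the executable, one thing worth checking (a sketch, assuming standard Qt 5 deployment behaviour) is whether the deployed app actually searches that directory; a qt.conf next to MyApp.exe makes it explicit:

        ; qt.conf, placed in the same directory as MyApp.exe
        [Paths]
        Plugins = plugins

    By default a deployed Qt app looks for a printsupport\ plugin folder relative to the application directory, so either this qt.conf or moving plugins\printsupport up next to the exe should let the printer plugin load.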


  • IIS 7 can't connect to SQL Server 2008

    - by Nicolas Cadilhac
    Sorry if this is the most-seen question on the web, but this is my turn. I am trying to publish my ASP.NET MVC app on IIS 7 against MS SQL Server 2008. I am on a Windows Server 2008 virtual machine. I get the following classical error:

        A network-related or instance-specific error occurred while establishing a connection
        to SQL Server. The server was not found or was not accessible. Verify that the instance
        name is correct and that SQL Server is configured to allow remote connections.
        (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)

    Under SQL Server, "Allow remote connections" is checked. My connection string is:

        Data Source=.\MSSQLSERVER;Initial Catalog=mydbname;User Id=sa;Password=mypassword

    I also tried with no username/password and "Integrated Security=true". There is only one instance of SQL Server installed. I tried to access my web page locally and remotely. There is no active firewall on the virtual machine. Thanks for your help.
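    One detail that stands out (a suggestion, not from the original post): error 26 means the instance name could not be resolved, and .\MSSQLSERVER addresses a *named* instance called "MSSQLSERVER", which is not how a default instance is reached. For a default instance the data source is just the machine:

        Data Source=.;Initial Catalog=mydbname;User Id=sa;Password=mypassword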


  • handling references to pointers/double pointers using SWIG [C++ to Java]

    - by Siddu
    My code has an interface like:

        class IExample {
            ~IExample();
            // pure virtual methods ...
        };

    a class inheriting the interface like:

        class CExample : public IExample {
        protected:
            CExample();
            // implementation of pure virtual methods ...
        };

    and a global function to create an object of this class:

        createExample(IExample*& obj) { obj = new CExample(); }

    Now, I am trying to get a Java API wrapper using SWIG. The SWIG-generated interface has a constructor like:

        IExample(long cPtr, boolean cMemoryOwn)

    and the global function becomes:

        createExample(IExample obj)

    The problem is when I do:

        IExample exObject = new IExample(LogFileLibraryJNI.new_plong(), true /* or false */);
        createExample(exObject);

    the createExample(...) API at the C++ layer successfully gets called; however, when the call returns to the Java layer, the cPtr (long) variable does not get updated. Ideally, this variable should contain the address of the CExample object. I read in the documentation that typemaps can be used to handle output parameters and pointer references as well; however, I am not able to figure out a suitable way to use typemaps to resolve this problem, or any other workaround. Please suggest if I am doing something wrong, or how to use a typemap in this situation.
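    A typemap-free workaround (a sketch; createExampleHelper is a name invented here) is to wrap the output-parameter factory in an %inline helper that returns the pointer directly, so SWIG generates an ordinary Java method returning an IExample proxy:

        /* In the .i file */
        %inline %{
        IExample* createExampleHelper() {
            IExample* obj = 0;
            createExample(obj);   /* fills in the pointer via the C++ reference */
            return obj;
        }
        %}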


  • error C2504: 'BASECLASS' : base class undefined

    - by numerical25
    I checked out a post similar to this, but the setup was different and the issue was never resolved. The problem with mine is that for some reason the compiler expects a definition for the base class, but the base class is just an interface. Below is the error in its entirety:

        c:\users\numerical25\desktop\intro todirectx\godfiles\gxrendermanager\gxrendermanager\gxrendermanager\gxdx.h(2) : error C2504: 'GXRenderer' : base class undefined

    Below is the code that shows how the headers link with one another.

    GXRenderManager.h:

        #ifndef GXRM
        #define GXRM

        #include <windows.h>
        #include "GXRenderer.h"
        #include "GXDX.h"
        #include "GXGL.h"

        enum GXDEVICE {
            DIRECTX,
            OPENGL
        };

        class GXRenderManager {
        public:
            static int Ignite(GXDEVICE);
        private:
            static GXRenderer *renderDevice;
        };

        #endif

    At the top of GXRenderManager.h there are the GXRenderer, windows, GXDX, and GXGL headers. I am assuming that by including them all in this document, they all link to one another as if they were all in the same document. Correct me if I am wrong, because that's how I view headers. Moving on...

    GXRenderer.h:

        class GXRenderer {
        public:
            virtual void Render() = 0;
            virtual void StartUp() = 0;
        };

    GXGL.h:

        class GXGL : public GXRenderer {
        public:
            void Render();
            void StartUp();
        };

    GXDX.h:

        class GXDX : public GXRenderer {
        public:
            void Render();
            void StartUp();
        };

    GXGL.cpp and GXDX.cpp respectively:

        #include "GXGL.h"

        void GXGL::Render() { }
        void GXGL::StartUp() { }

        // ...next document

        #include "GXDX.h"

        void GXDX::Render() { }
        void GXDX::StartUp() { }

    Not sure what's going on. I think it's how I am linking the documents, but I am not sure.
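    The usual cause of this error (a suggestion, not from the original post) is that GXDX.h and GXGL.h are also compiled via their own .cpp files, where GXRenderer has not been declared yet: header inclusion is purely textual, so each header must include what it uses. A sketch of the fix, giving each header its own guard and its own includes:

        // GXRenderer.h
        #ifndef GXRENDERER_H
        #define GXRENDERER_H
        class GXRenderer {
        public:
            virtual void Render() = 0;
            virtual void StartUp() = 0;
        };
        #endif

        // GXDX.h (GXGL.h is analogous)
        #ifndef GXDX_H
        #define GXDX_H
        #include "GXRenderer.h"   // the base class must be visible here
        class GXDX : public GXRenderer {
        public:
            void Render();
            void StartUp();
        };
        #endif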


  • Need help using the DefaultModelBinder for a nested model.

    - by Will
    There are a few related questions, but I can't find an answer that works. Assume I have the following models:

        public class EditorViewModel
        {
            public Account Account { get; set; }
            public string SomeSimpleStuff { get; set; }
        }

        public class Account
        {
            public string AccountName { get; set; }
            public int MorePrimitivesFollow { get; set; }
        }

    and a view that extends ViewPage<EditorViewModel> and does the following:

        <%= Html.TextBoxFor(model => model.Account.AccountName)%>
        <%= Html.ValidationMessageFor(model => model.Account.AccountName)%>
        <%= Html.TextBoxFor(model => model.SomeSimpleStuff)%>
        <%= Html.ValidationMessageFor(model => model.SomeSimpleStuff)%>

    and my controller looks like:

        [HttpPost]
        public virtual ActionResult Edit(EditorViewModel account) { /*...*/ }

    How can I get the DefaultModelBinder to properly bind my EditorViewModel? Without doing anything special, I get an empty instance of my EditorViewModel with everything null or default. The closest I've come is by calling UpdateModel manually:

        [HttpPost]
        public virtual ActionResult Edit(EditorViewModel account)
        {
            account.Account = new Account();
            UpdateModel(account.Account, "Account");
            // this kills me:
            UpdateModel(account);

    This successfully updates my Account property model, but when I call UpdateModel on account (to get the rest of the public properties of my EditorViewModel) I get the completely unhelpful "The model of type ... could not be updated." There is no inner exception, so I can't figure out what's going wrong. What should I do with this?
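    One likely culprit (an assumption about what is happening here, not a confirmed diagnosis): the action parameter is named account while the view model has a property named Account, and the DefaultModelBinder treats the parameter name as a field prefix, so the "Account.AccountName" fields get bound against the wrong target. A sketch of the usual fix is simply renaming the parameter:

        [HttpPost]
        public virtual ActionResult Edit(EditorViewModel viewModel)
        {
            // "Account.AccountName" and "SomeSimpleStuff" now bind normally,
            // because no property shares the parameter's name.
            if (!ModelState.IsValid)
                return View(viewModel);
            return RedirectToAction("Index");   // hypothetical success route
        }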


  • SQL Server slow in production environment

    - by Lieven Cardoen
    I have a weird problem in a customer's production environment. I can't give any details on the infrastructure, except that SQL Server runs on a virtual server. The data, log and filestream files are on another storage server (data and filestream together, and the log on a separate server).

    In our local test environment, there's one particular query that executes with these durations (first we clear the cache):

        300 ms   (the first time takes longer, but from then on it's cached)
        20 ms
        15 ms
        17 ms

    In the customer's production environment, the SQL Server is more powerful; these are the durations (I didn't have the rights to clear the cache; will try this tomorrow):

        2500 ms
        2600 ms
        2400 ms

    The servers in the customer's production environment are more powerful, but they do use virtual servers (we don't). What could be the cause? Not enough memory? Fragmentation? Physical storage? How would you tackle this performance problem?

    EDIT: Some people have asked me if the data set is equal, and it is: I restored their database on our environment. It's true that this was the first thing I looked at. (@Everyone: I added the edit because it will be the first thing that many will think of.)
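    A first diagnostic step (a sketch, not from the original post) is to compare where the time goes rather than just the totals, by running the query in both environments with statistics enabled:

        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;
        -- run the slow query here:
        -- high "physical reads" in production would point at the storage
        -- servers; high CPU time with equal reads points elsewhere.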


  • XenServer 5.5 local storage problem

    - by Jason Nerer
    Hi community. I have the following problem with a Citrix XenServer 5.5. I had to physically move the host, so I shut down all machines via the console:

        xe vm-shutdown force=true vm=my-machine-uuids

    After that I shut down the machine itself by issuing:

        halt

    After the reboot today, the local storage repository is unplugged. I was trying to repair it via XenCenter, but I don't trust that one. So I tried:

        [root@xenserver ~]# xe pbd-list
        uuid ( RO)              : ef6e2f3b-5825-393c-23e1-391d105c87ec
        host-uuid ( RO)         : c4bcf09c-2e52-448f-8210-df5d13bd33a9
        sr-uuid ( RO)           : 2fb3be9c-075c-53ed-acb6-42f0c4ad0614
        device-config (MRO)     : device: /dev/disk/by-id/scsi-SATA_WDC_WD5001ABYS-_WD-WCAS83698154,/dev/disk/by-id/scsi-SATA_WDC_WD5001ABYS-_WD-WCAS83694262
        currently-attached ( RO): false

    To reattach the storage I issued:

        xe pbd-plug uuid=ef6e2f3b-5825-393c-23e1-391d105c87ec

    That one has now been running for a while but is not talking to me. The local repo has around 1 TB. Should I wait, or are there any other options to reattach the local repo? What could have caused this problem? Any ideas? Thanks. J
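    While the plug call runs, it can be watched from another shell rather than guessed at (a sketch using standard xe commands; the SR uuid is taken from the pbd-list output above):

        # Any pending task (pbd-plug shows up here while it works):
        xe task-list
        # State of the SR behind this PBD:
        xe sr-list uuid=2fb3be9c-075c-53ed-acb6-42f0c4ad0614 params=name-label,physical-size,physical-utilisation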


  • Fluent NHibernate IDictionary with composite element mapping

    - by Alessandro Di Lello
    Hi there, I have these 2 classes:

        public class Category
        {
            IDictionary<string, CategoryResource> _resources;
        }

        public class CategoryResource
        {
            public virtual string Name { get; set; }
            public virtual string Description { get; set; }
        }

    and this is the xml mapping:

        <class name="Category" table="Categories">
          <id name="ID">
            <generator class="identity"/>
          </id>
          <map name="Resources" table="CategoriesResources" lazy="false">
            <key column="EntityID" />
            <index column="LangCode" type="string"/>
            <composite-element class="Aca3.Models.Resources.CategoryResource">
              <property name="Name" column="Name" />
              <property name="Description" column="Description"/>
            </composite-element>
          </map>
        </class>

    I'd like to write it with Fluent. I found something similar and was trying this code:

        HasMany(x => x.Resources)
            .AsMap<string>("LangCode")
            .AsIndexedCollection<string>("LangCode", c => c.GetIndexMapping())
            .Cascade.All()
            .KeyColumn("EntityID");

    but I don't know how to map the CategoryResource entity as a composite element inside the Category element. Any advice? Thanks.
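    A sketch of one way to express the composite element fluently (assuming a Fluent NHibernate version where the HasMany part exposes Component()):

        HasMany(x => x.Resources)
            .Table("CategoriesResources")
            .AsMap<string>("LangCode")
            .KeyColumn("EntityID")
            // Maps CategoryResource as a <composite-element> rather than an entity.
            .Component(c =>
            {
                c.Map(x => x.Name);
                c.Map(x => x.Description);
            })
            .Cascade.All();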


  • Classic ASP Session not working in IIS 7 Windows Server 2008 R2 x64

    - by user553361
    Hi, I've been googling and searching here for info about this, but so far couldn't find anything relevant to my problem. We have a website currently working on IIS 6 and Windows Server 2003 (x86) without any problem. Now we want to migrate our server to a virtual machine with Windows Server 2008 R2 (x64) and IIS 7. Our current app is built in Classic ASP and SQL Server (the database is on a second server, but that is staying the way it is now). The website is configured as a website, not a virtual directory, using the DefaultAppPool with 4 applications.

    Now, the problem I'm getting is with the sessions, or at least that's what I think, since I created a simple hello.asp with this code:

        <%
        response.write "Hello"
        response.write Session.SessionID
        %>

    And this is giving us this result:

        Hello
        error '8002801d'
        /hello.asp, line 3

    ASP Sessions properties:

        Enable Session State          : True
        Maximum Sessions              : 2147483647
        New ID On Secure Connection   : True
        Time-out                      : 20 min

    This is the log in Event Viewer:

        Warning 24/12/2010 14:03:42 Active Server Pages 9 None
        FailedReqLog
        Url                http://apps.shocklogic.com:80/hello.asp
        App Pool           DefaultAppPool
        Authentication     anonymous
        User from token    NT AUTHORITY\IUSR
        Activity ID        {00000000-0000-0000-1400-0080000000F8}
        Site               1
        Process            3312
        Failure Reason     STATUS_CODE
        Trigger Status     500
        Final Status       500
        Time Taken         110 msec

    Would be great if anyone has any ideas. Thanks, Federico
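    Error '8002801d' is TYPE_E_LIBNOTREGISTERED ("Library not registered"), which after an x86-to-x64 move often means a COM library the page touches is only registered for 32-bit. A quick test (a suggestion, not from the original post) is to run the application pool in 32-bit mode:

        %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /enable32BitAppOnWin64:true

    If the page works after that, the fix is either staying 32-bit or registering 64-bit versions of the components involved.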


  • Unable to fetch initial output of "defrag" command on Windows Server 2008 R2 in a WOW64 environment

    - by Ganesh
    Hi all,

    [Application & code background] I have an MFC application executing on Windows Server 2008 R2 in a WOW64 environment, which defragments a selected drive on the disk in response to user input. I initiate the defragmentation process (i.e. cmd /c defrag -v c:) using the CreateProcess() API; along with this, to display the output of the process on the main screen, I created a pipe using the CreatePipe() API. I use the PeekNamedPipe() and ReadFile() APIs to get the output and display it.

    [Problem area] When the process is launched, I am not getting the initial output, which should be:

        Microsoft Disk Defragmenter
        Copyright (c) 2007 Microsoft Corp.
        Invoking defragmentation on (C:)...

    I constantly monitor the output of the process while it is in progress, but am not able to get anything as output in the pipe. It seems the process is not doing anything, and it appears as if the application is not responding. But after a certain period of time, once the process is about to complete, I get the result along with the initial data.

    [Sample code]

        // Pipe created
        if (0 == ::CreatePipe(&l_hStdOutRead, &l_hStdOutWrite, &l_SecurityAttribute, (DWORD)NULL))
        {
            // Error code
        }

        // Process created/launched
        if (0 == ::CreateProcess(NULL, (LPTSTR)f_csProcessName, &l_stSecurityAttributes, NULL, TRUE,
                                 CREATE_NO_WINDOW, NULL, NULL, &l_StartupInfo, &l_CmdPI))
        {
            // Error code
        }

        // Read output
        if (0 == ::PeekNamedPipe(m_hStdOutRead, l_cArrPeekBuffer, (DWORD)NULL, (LPDWORD)NULL, &l_dwAvailable, (LPDWORD)NULL))
        {
            // Return to read again
        }
        if (FALSE == ::ReadFile(m_hStdOutRead, l_cArrOutput, min(BUFFER_SIZE - 2, l_dwAvailable), &l_dwRead, NULL) || !l_dwRead)
        {
            // Error code
        }

        // Display data.

    If anyone is aware of a similar problem or has worked on a similar issue, please let me know the solution. Thanks in advance, Ganesh


  • C# File IO with Streams - Best Memory Buffer Size

    - by AJ
    Hi, I am writing a small IO library to assist with a larger (hobby) project. Part of this library performs various functions on a file, which is read/written via a FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time-consuming (it could just be a simple file copy, for example, or may involve encryption...).

    My main question is: what is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2k, which would cover a CD sector size and is a nice multiple of a 512-byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise that with today's PCs I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app.

    As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP/HTTP servers (over a local network / fast-ish DSL). What would be the best memory buffer size for those (again, a "best-case" trade-off between perceived responsiveness and performance)?

    Thanks in advance for any ideas, Adam
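    For comparison (a sketch, not from the original post): the BCL itself settled on 80 KiB, which is the default Stream.CopyTo buffer size, big enough to amortize call overhead yet under the 85,000-byte large-object-heap threshold. A copy loop with per-chunk progress might look like:

        const int BufferSize = 80 * 1024;   // 80 KiB, same as Stream.CopyTo's default
        var buffer = new byte[BufferSize];
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            destination.Write(buffer, 0, read);
            OnProgress(read);   // hypothetical progress-event raiser
        }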


  • NSIS takes ownership of IIS system files

    - by Lucas
    I recently encountered an issue with NSIS that I believe is related to an interaction with UAC, but I am at a loss to explain it and I do not know how to prevent it in the future.

    I have an installer that creates and removes IIS virtual directories using the NsisIIS plugin. The installer appeared to work correctly on my Windows 7 workstation. When the installer was run on a Windows 2008 R2 server it installed properly, but the uninstaller removed all of the virtual directories and put IIS in an unusable state, to the point that I had to remove the Default Web Site and re-add it.

    What I eventually found was that all of the IIS configuration files under C:\Windows\System32\inetsrv\config had a lock icon on them. Some investigation seems to indicate that this means a user account has taken ownership of the file; however, all the files listed SYSTEM as the file owner. I did check a different server that I have not run the installer on, and it does not have the lock icon on the IIS files.

    I have also seen the same lock icon appear on other files that the NSIS installer creates. For instance, I have a Web.Config.tpl file that is processed using the NSIS ReplaceInFile, which also appears with the lock icon after the installer finishes. After I explicitly grant another user account access to the file, the lock icon goes away. I run the installer under the local Administrator account on the 2008 R2 server, so I do not get the UAC prompt.

    Here is the relevant code from the install.nsi file:

        RequestExecutionLevel admin

        Section "Application" APP_SECTION
          SectionIn RO
          Call InstallApp
        SectionEnd

        Section "un.Uninstaller Section"
          Delete "$PROGRAMFILES\${PROGRAMFILESDIR}\Uninstall.exe"
          Call un.InstallApp
        SectionEnd

        Function InstallApp
          File /oname=Web.Config Web.Config.tpl
          !insertmacro ReplaceInFile Web.Config %CONNECTION_STRING% $CONNECTION_STRING
        FunctionEnd

        Function un.InstallApp
          ReadRegStr $0 HKLM "Software\${REGKEY}" "VirtualDir"
          NsisIIS::DeleteVDir "$0"
          Pop $0
        FunctionEnd

    I have three questions stemming from this incident:

    1. How did this happen?
    2. How can I fix my installer to prevent it from happening again?
    3. How can I repair the permissions on the IIS config files?
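    On the third question: the lock overlay in Explorer generally marks files whose ACL no longer includes the inherited entries, rather than an ownership change. A sketch of a repair (a suggestion, not from the original post; back up the directory first) is to restore inheritance on the config store:

        rem Re-apply inherited ACLs recursively, continuing past errors:
        icacls "%windir%\System32\inetsrv\config" /reset /t /c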


  • Write asynchronously to file in perl

    - by Stefhen
    Basically I would like to:

    1. Read a large amount of data from the network into an array in memory.
    2. Asynchronously write this array data, running it through bzip2 before it hits the disk.
    3. Repeat...

    Is this possible? If so, I know that I will have to somehow read the next pass of data into a different array, as the AIO docs say that this array must not be altered before the async write is complete. I would like to background all of my writes to disk in order, as the bzip2 pass is going to take much longer than the network read. Is this doable?

    Below is a simple example of what I think is needed, but this just reads a file into array @a for testing.

        use warnings;
        use strict;
        use EV;
        use IO::AIO;
        use Compress::Bzip2;
        use FileHandle;
        use Fcntl;

        my @a;
        print "loading to array...\n";
        while (<>) {
            $a[$. - 1] = $_;
        }
        print "array loaded...\n";

        my $aio_w = EV::io IO::AIO::poll_fileno, EV::WRITE, \&IO::AIO::poll_cb;

        aio_open "./out", O_WRONLY | O_NONBLOCK, 0, sub {    # bitwise |, not logical ||
            my $fh = shift or die "error while opening: $!\n";
            my $data = join '', @a;    # aio_write wants a scalar, not an array
            aio_write $fh, undef, undef, $data, 0, sub {
                $_[0] > 0 or die "error: $!\n";
                EV::unloop;
            };
        };

        EV::loop EV::LOOP_NONBLOCK;
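    To get the bzip2 pass in before the write, one option (a sketch, assuming Compress::Bzip2's in-memory memBzip function) is to compress the buffered data into a scalar first and hand that scalar to aio_write:

        use Compress::Bzip2 qw(memBzip);

        my $raw        = join '', @a;
        my $compressed = memBzip($raw);    # bzip2 in memory, no temp file

        # Write the whole compressed block at offset 0; the scalar must stay
        # untouched until the callback fires.
        aio_write $fh, 0, length $compressed, $compressed, 0, sub {
            $_[0] == length $compressed or die "short write: $!\n";
        };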


  • Error Ant Build/deploy to websphere 7.0

    - by adisembiring
    Hi, I'm trying to build/deploy a war to WebSphere Process Server 7.0, running on Windows. I use http://illegalargumentexception.blogspot.com/2008/08/ant-automated-deployment-to-websphere.html as my reference and http://illegalargumentexception.googlecode.com/svn/trunk/code/java/WebSphereAntFiles/ as my sample code to deploy.

    This is my build.properties:

        #build properties
        mywebappear=D:/data/code/WebSphereAntFiles/scripts/test/mywebappEAR.ear

        #WAS6 install directory
        was_home=C:/IBM/WID7_WTE/runtimes/bi_v7

        #server name (see cell/node/server; e.g. "server1")
        was_server=server1

        #user + password; for use when security is enabled
        was_user=admin
        was_password=admin

        #stops scripts on problem
        was_failonerror=true

        #virtual host
        was_virtualhost=default_host

        #Absolute path to EAR file
        #was_ear=fooEAR.ear

        #Name of the enterprise application
        #was_appname=fooEAR

    And this is my console output while trying to build with ws_ant.bat:

        [wsDefaultBindings] mywebapp.war
        [wsDefaultBindings] <virtual-host> --> default_host
        [wsDefaultBindings] ------------------------
        [wsDefaultBindings] Saving EAR File to directory
        [wsDefaultBindings] Saved EAR File to directory Successfully

        test_wsStartServer:
        WAS_wsStartServer:
        depCheck:
        depCheck:
        [startServer] ADMU0116I: Tool information is being logged in file
        [startServer]            C:\IBM\WID7_WTE\runtimes\bi_v7\profiles\qwps\logs\server1\startServer.log
        [startServer] ADMU0128I: Starting tool with the qwps profile
        [startServer] ADMU3100I: Reading configuration for server: server1
        [startServer] ADMU3028I: Conflict detected on port 8880. Likely causes: a) An instance of
        [startServer]            the server server1 is already running b) some other process is
        [startServer]            using port 8880
        [startServer] ADMU3027E: An instance of the server may already be running: server1
        [startServer] ADMU0111E: Program exiting with error:
        [startServer]            com.ibm.websphere.management.exception.AdminException: ADMU3027E: An
        [startServer]            instance of the server may already be running: server1
        [startServer] ADMU1211I: To obtain a full trace of the failure, use the -trace option.
        [startServer] ADMU0211I: Error details may be seen in the file:
        [startServer]            C:/IBM/WID7_WTE/runtimes/bi_v7/profiles/qwps\logs\server1\startServer.log

        BUILD FAILED
        D:\data\code\WebSphereAntFiles\scripts\test\build.xml:68: The following error occurred while executing this line:
        D:\data\code\WebSphereAntFiles\scripts\was\wsStartServer.xml:49: Java returned: -1
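    The ADMU3028I/ADMU3027E pair says the build failed at its wsStartServer step because server1 already holds port 8880. A quick check before re-running the build (a sketch using the standard WebSphere profile scripts; paths taken from the log above):

        rem Is the server already running?
        C:\IBM\WID7_WTE\runtimes\bi_v7\profiles\qwps\bin\serverStatus.bat server1 -user admin -password admin
        rem If so, either skip the start step or stop the server first:
        C:\IBM\WID7_WTE\runtimes\bi_v7\profiles\qwps\bin\stopServer.bat server1 -user admin -password admin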


  • Apache is sending php files to my browser instead of parsing

    - by justen doherty
    I have to set up PHP on an existing web host. I have made a virtual host entry, but for some reason Apache is sending the PHP to the browser instead of parsing it. From googling around it looks like a problem with the MIME types, but I'm not an Apache expert by any means, so if anyone can help it would be appreciated.

    I have the following in my httpd.conf:

        AddHandler php5-script php
        DirectoryIndex index.html index.phtml index.php index.phps
        AddType application/x-httpd-php .phtml
        AddType application/x-httpd-php .php
        AddType application/x-httpd-php-source .phps

    The PHP module is loaded into Apache:

        /usr/sbin/apachectl -M
        Loaded Modules:
         core_module (static)
         mpm_prefork_module (static)
         http_module (static)
         so_module (static)
         auth_basic_module (shared)
         auth_digest_module (shared)
         authn_file_module (shared)
         authn_alias_module (shared)
         authn_anon_module (shared)
         authn_dbm_module (shared)
         authn_default_module (shared)
         authz_host_module (shared)
         authz_user_module (shared)
         authz_owner_module (shared)
         authz_groupfile_module (shared)
         authz_dbm_module (shared)
         authz_default_module (shared)
         ldap_module (shared)
         authnz_ldap_module (shared)
         include_module (shared)
         log_config_module (shared)
         logio_module (shared)
         env_module (shared)
         ext_filter_module (shared)
         mime_magic_module (shared)
         expires_module (shared)
         deflate_module (shared)
         headers_module (shared)
         usertrack_module (shared)
         setenvif_module (shared)
         mime_module (shared)
         dav_module (shared)
         status_module (shared)
         autoindex_module (shared)
         info_module (shared)
         dav_fs_module (shared)
         vhost_alias_module (shared)
         negotiation_module (shared)
         dir_module (shared)
         actions_module (shared)
         speling_module (shared)
         userdir_module (shared)
         alias_module (shared)
         rewrite_module (shared)
         proxy_module (shared)
         proxy_balancer_module (shared)
         proxy_ftp_module (shared)
         proxy_http_module (shared)
         proxy_connect_module (shared)
         cache_module (shared)
         suexec_module (shared)
         disk_cache_module (shared)
         file_cache_module (shared)
         mem_cache_module (shared)
         cgi_module (shared)
         version_module (shared)
         fcgid_module (shared)
         perl_module (shared)
         php5_module (shared)
         proxy_ajp_module (shared)
         ssl_module (shared)

    And this is my virtual host entry:

        ServerName viridor-cms.co.uk
        ServerAlias www.viridor-cms.co.uk
        UseCanonicalName Off
        DocumentRoot /var/www/vhosts/viridor-cms.co.uk/httpdocs
        CustomLog /var/www/vhosts/viridor-cms.co.uk/cms-access_log common
        ErrorLog /var/www/vhosts/viridor-cms.co.uk/cms-error_log
        DirectoryIndex index.php index.html
        php_admin_flag engine on
        php_admin_flag safe_mode on
        AddType application/x-httpd-php .php
        AddType application/x-httpd-php-source .phps

    Please help, my head is so sore from banging it against the table and the wall!
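    One quick way to narrow this down (a diagnostic sketch, not from the original post) is to check what Content-Type the server actually sends for a .php file, which tells you whether the handler ran at all:

        # Content-Type: text/html means PHP parsed the file;
        # application/x-httpd-php means the raw source was sent using the
        # AddType mapping, i.e. the php5-script handler never fired.
        curl -I http://www.viridor-cms.co.uk/index.php

    If the raw source is being sent, the handler mapping is not taking effect for this vhost, so the next place to look is whether another per-vhost config (e.g. a hosting panel's include) overrides the handler there.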


  • Workaround for GNU Make 3.80 eval bug

    - by bengineerd
    I'm trying to create a generic build template for my Makefiles, kind of like they discuss in the eval documentation. I've run into a known bug in GNU Make 3.80: when $(eval) evaluates a line that is over 193 characters, Make crashes with a "virtual memory exhausted" error. The code I have that causes the issue looks like this:

        SRC_DIR = ./src/
        PROG_NAME = test

        define PROGRAM_template
        $(1)_SRC_DIR = $$(SRC_DIR)$(1)/
        $(1)_SRC_FILES = $$(wildcard $$($(1)_SRC_DIR)*.c)
        $(1)_OBJ_FILES = $$($(1)_SRC_FILES:.c=.o)

        $$($(1)_OBJ_FILES) : $$($(1)_SRC_FILES)   # This is the problem line
        endef

        $(eval $(call PROGRAM_template,$(PROG_NAME)))

    When I run this Makefile, I get:

        gmake: *** virtual memory exhausted.  Stop.

    The expected output is that all .c files in ./src/test/ get compiled into .o files (via an implicit rule). The problem is that $$($(1)_SRC_FILES) and $$($(1)_OBJ_FILES) are together over 193 characters long (if there are enough source files). I have tried running the Makefile on a directory with only 2 .c files, and it works fine. It's only when there are many .c files in the SRC directory that I get the error.

    I know that GNU Make 3.81 fixes this bug. Unfortunately I do not have the authority or ability to install the newer version on the system I'm working on, so I'm stuck with 3.80. Is there some workaround? Maybe split $$($(1)_SRC_FILES) up and declare each dependency individually within the eval?
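    Along the lines of that last idea, one workaround (a sketch; it keeps every eval'd line short regardless of how many sources there are) is to eval one tiny rule per object file instead of a single rule containing both full lists:

        define PROGRAM_template
        $(1)_SRC_DIR = $$(SRC_DIR)$(1)/
        $(1)_SRC_FILES = $$(wildcard $$($(1)_SRC_DIR)*.c)
        $(1)_OBJ_FILES = $$($(1)_SRC_FILES:.c=.o)
        endef

        $(eval $(call PROGRAM_template,$(PROG_NAME)))

        # One short eval per file: "foo.o: foo.c" never approaches 193 chars.
        $(foreach obj,$($(PROG_NAME)_OBJ_FILES),$(eval $(obj): $(obj:.o=.c)))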


  • how to compile with llvm and g++?

    - by Sriram
    Hi, I use a Fedora 11 system and recently installed LLVM (sudo yum -y install llvm llvm-docs llvm-devel). When I search for LLVM I find binaries in /usr/bin; some of the links (llvm-gcc, llvm-g++, llvm-cpp, etc.) are broken. The include files are in /usr/include/llvm and the libs in /usr/lib/llvm.

    How do I compile with g++? I tried to compile the Kaleidoscope code given in the tutorial (http://llvm.org/docs/tutorial/LangImpl3.html) as directed, but it fails to compile. I get this:

        toy.cpp:5:30: error: llvm/LLVMContext.h: No such file or directory
        toy.cpp:352: error: 'getGlobalContext' was not declared in this scope
        toy.cpp: In member function 'virtual llvm::Value* NumberExprAST::Codegen()':
        toy.cpp:358: error: 'getGlobalContext' was not declared in this scope
        toy.cpp: In member function 'virtual llvm::Value* BinaryExprAST::Codegen()':
        toy.cpp:379: error: 'getDoubleTy' is not a member of 'llvm::Type'
        toy.cpp:379: error: 'getGlobalContext' was not declared in this scope
        toy.cpp: In member function 'llvm::Function* PrototypeAST::Codegen()':
        toy.cpp:407: error: 'getDoubleTy' is not a member of 'llvm::Type'
        toy.cpp:407: error: 'getGlobalContext' was not declared in this scope
        toy.cpp:408: error: 'getDoubleTy' is not a member of 'llvm::Type'
        toy.cpp: In member function 'llvm::Function* FunctionAST::Codegen()':
        toy.cpp:454: error: 'getGlobalContext' was not declared in this scope
        toy.cpp: In function 'int main()':
        toy.cpp:543: error: 'LLVMContext' was not declared in this scope
        toy.cpp:543: error: 'Context' was not declared in this scope
        toy.cpp:543: error: 'getGlobalContext' was not declared in this scope

    I cannot find the LLVMContext.h file either, so I guess this might be a version problem. What should I do to make it work? Some help would be good! Thanks in advance... :)
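    Two things worth noting (hedged, based on LLVM release history rather than the original post): llvm::LLVMContext first appeared in LLVM 2.6, so a distribution package older than that simply has no LLVMContext.h, and the tutorial code needs a matching LLVM. Once a new-enough LLVM is installed, the tutorial's own compile line uses llvm-config to supply the flags:

        # llvm-config ships with llvm-devel and prints the right include/lib flags:
        g++ -g toy.cpp `llvm-config --cxxflags --ldflags --libs core` -o toy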


  • java distributed cache for low latency, high availability

    - by Shahbaz
    I've never used distributed caches/DHTs like memcached, JBoss Cache, Ehcache, etc., and I'm wondering which, if any, is appropriate for my use. First, I'm not doing web applications (most of these projects seem to be geared towards web apps). I write servers (order management systems, actually) for financial trading firms. The servers themselves are not too complicated. They need to receive information (market data, orders, executions, etc.) and route it to its destination, possibly transforming some of these messages along the way.

    I am looking at these products to solve the following problems:

    - Safe repository of the state of the server. I'd rather build the logic of my application as a bunch of transformers (similar to Apache Camel) and store the state in a 'safe' place.
    - This repository should be distributed: in case one of these data stores crashes, one or two more should be up, and I should be able to switch to them seamlessly.
    - This repository should be fast. Single-digit milliseconds count here; in other words, the systems which consume/process this data are automated systems, not humans clicking on links. This system needs to have high throughput and low latency. By sending my data outside the process, I am necessarily slowing performance, but I am trying to balance absolute raw speed against absolute protection of data.
    - This repository should be safe. Similar to the point about several online backups, this system needs to write data to disk (potentially more than one disk).

    I'd really like to stop writing my own 'transaction servers.' Am I correct to be looking into projects such as JBoss Cache, Ehcache, etc.? Thanks

