Search Results

Search found 21322 results on 853 pages for 'vs 2008'.

Page 497/853 | < Previous Page | 493 494 495 496 497 498 499 500 501 502 503 504  | Next Page >

  • Button Onclick event (which is in codebehind) doesn't get triggered in MVC 2

    - by rksprst
    I had an MVC 1.0 web application in VS 2008; I just upgraded the project to VS 2010, which automatically upgraded MVC to 2.0. I have a bunch of view pages with codebehind files that were manually added. The project worked fine before the upgrade, but now the onclick events don't get triggered. I.e. I have an asp:Button with an onclick event that points to a method in the codebehind; when you click the button, the onclick event doesn't get triggered. In fact, when you look at the Page variable, IsPostBack is false. This is really bizarre and I'm wondering if anyone knows what happened and how to fix it. I'm thinking it has something to do with the changes in MVC 2.0, but I'm not sure. Any help is really appreciated; I've been trying to figure this out for a while. (Deleting the codebehinds and moving that logic to the controllers is not really an option since there are so many pages; moving back to VS 2008 is a last resort, as I want to make use of some of the VS 2010 features like performance testing.)
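
    For what it's worth, Web Forms postbacks (and therefore codebehind event handlers) are not part of the MVC request pipeline, so the usual MVC-style fix is to post the form to a controller action rather than relying on IsPostBack. A minimal sketch with hypothetical names, not the poster's actual code:

      using System.Web.Mvc;

      // Hypothetical controller action replacing a codebehind OnClick handler.
      // The view would post to it with a plain form:
      //   <form action="/Orders/Save" method="post"> ... </form>
      public class OrdersController : Controller
      {
          [HttpPost]
          public ActionResult Save(string orderName)
          {
              // ...the work formerly done in the button's OnClick handler...
              return RedirectToAction("Index");
          }
      }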

    Read the article

  • What for, Visual Studio?! The ml, cl, and link executables would suffice

    - by AntonIO
    It says in the /library article /9s7c9wdw: "You can start this tool [cl.exe] only from the Visual Studio command prompt. You cannot start it from a system command prompt or from Windows Explorer." The corresponding (v=VS.80) page, geared towards Visual Studio 2005, makes no such mention. Moreover, there is this Q&A. The thing is: why should anybody spend anything on VS? ml is provided free of charge (necessarily so, since it poses no value addition). The combined size of the other two is 895 KB, uncompressed. The GUI is a disservice; I myself have found half a dozen bugs. However, if the above is true, you'd need the IDE. MSFT fanboys, please step up. Background is that I have the 2008 Pro edition. The official Firefox builds use VS 2005, which I have on another system. To me no redundancy is acceptable. That's when I started pondering boiling down VS and merely copying over the essential binaries, and then extended the thought to synthetically updating V$.
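
    For the record, cl.exe does run from a plain command prompt once the right environment variables are set; the "Visual Studio command prompt" merely calls vcvarsall.bat for you. A sketch, assuming the default VS 2008 install path:

      :: set up PATH/INCLUDE/LIB the way the Visual Studio command prompt does
      call "C:\Program Files\Microsoft Visual Studio 9.0\VC\vcvarsall.bat" x86
      :: compile and link in one step (cl.exe drives link.exe itself)
      cl /EHsc hello.cpp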

    Read the article

  • Visual Studio hangs when deploying a cube

    - by Richie
    Hello All, I'm having an issue with an Analysis Services project in Visual Studio 2005. My project always builds but only occasionally deploys; no errors are reported and VS just hangs. This is my first Analysis Services project, so I am hoping there is something obvious that I am just missing. Here is the situation: I have a cube that I have successfully deployed. I then make some change, e.g. adding a hierarchy to a dimension. When I try to deploy again, VS hangs. I have to restart Analysis Services to regain control of VS so I can shut it down. I restart everything, sometimes once, sometimes twice or more, before the project will eventually deploy. This happens with any change I make; there seems to be no pattern to this behaviour. Sometimes I have to delete the cube from Analysis Services before restarting everything to get a successful deploy. Also, after I have successfully deployed the cube and subsequently reprocessed a dimension, when I open a query window in SQL Server Management Studio it says that it can't find any cubes. As a test, I deployed a cube successfully, deleted it in Analysis Services, and attempted to redeploy it without making any changes to the cube, only to see the same behaviour mentioned above. VS just hangs with no reason given, so I have no idea where to start hunting down the problem. It is taking 15-20 minutes to make a change as simple as setting the NameColumn of a dimension attribute. As you can imagine this is taking hours of my time, so I would greatly appreciate any assistance anyone can give me.
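
    If the hang cannot be cured, one possible workaround (a suggestion only; the file names here are hypothetical) is to bypass Visual Studio's deployment step and push the build output with the Analysis Services Deployment utility, which performs the same deployment outside the IDE:

      :: deploy the built .asdatabase silently, logging progress to a file
      Microsoft.AnalysisServices.Deployment.exe "bin\MyCube.asdatabase" /s:"deploy.log"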

    Read the article

  • How do I make software that preserves database integrity and correctness? Please help, confused.

    - by user287745
    I have made an application project in VS '08 (C#, with a SQL Server database created from VS '08). The database has about 20 tables with many fields each, and I have made an interface for adding, deleting, editing and retrieving data according to the predefined needs of the users. Now I have to:
    1) Turn the project into software I can deliver to my professor, so he can just double-click the icon and the software simply starts, with no VS '08 needed to start debugging.
    2) Put the database on one powerful computer (dual core, latest everything, Win XP), with users accessing it from other computers over the LAN. I am able to change the connection string to the shared database using VS '08 and the debugger whenever the server changes, but how am I supposed to do that once it is shipped software?
    3) Support many clients. Am I supposed to give the same software to everyone so they can all connect to the database? How will the integrity and correctness of the database be maintained? I mean, the db.mdf file will be in a folder shared with read and write access, so it is not guaranteed that only one user will write at a time. Is there any coding for this?
    Please help me out here; I am stuck and do not know what to do. I have no practical experience and would appreciate all the help. Thank you.
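
    For point 2, the usual approach is to keep the connection string out of the compiled code and in an app.config file that ships next to the .exe, so it can be edited on site without Visual Studio. A minimal sketch with hypothetical names:

      using System;
      using System.Configuration; // add a reference to System.Configuration.dll

      class Program
      {
          static void Main()
          {
              // app.config (deployed as MyApp.exe.config beside the executable) contains:
              //   <connectionStrings>
              //     <add name="Main" providerName="System.Data.SqlClient"
              //          connectionString="Data Source=SERVERPC\SQLEXPRESS;Initial Catalog=MyDb;Integrated Security=True" />
              //   </connectionStrings>
              // Editing that file on the target machine repoints the app at a
              // different server with no recompile.
              string cs = ConfigurationManager.ConnectionStrings["Main"].ConnectionString;
              Console.WriteLine(cs);
          }
      }

    For point 3, a shared .mdf file in a writable folder is the wrong unit of sharing: run one SQL Server instance on the server machine and have every client connect over the network with a connection string like the one above, so the database engine, not the file system, serializes concurrent writes.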

    Read the article

  • What /else/ causes this?

    - by Mordachai
    MFC Toolbox Library.lib(SimpleFileIO.obj) : error LNK2005: _wcsnlen already defined in libcmtd.lib(wcslen_s.obj)
    fatal error LNK1169: one or more multiply defined symbols found
    This is driving me nuts. Normally one gets this if the various projects in a solution do not agree on which CRT to use (single-threaded, multi-threaded, release or debug). However, I have been over this thing about 500 times now, and they all agree. Background: this is a VS 2010 project just converted from VS 2008. MFC Toolbox Library.lib is set to compile as a static library, using /MTd, as is the target .exe I am trying to compile in this solution. Further, the solution this was converted from (VS 2008) already compiles and links properly, so it's not like there is a disagreement between the two .vcproj's, or at least there wasn't before the conversion. Furthermore, the MFC Toolbox Library is used by about 25 other projects in another solution, and in that solution (Master Build English) it compiles and links against those other projects without complaint in both debug and release targets. I have just spent the last hour going over every single project property for this target project (Cimex Header Viewer) against several different target .exe projects in the Master Build English solution, and I cannot find a difference. They appear to be identical, except for their names. I've tried doing a clean and build-all. I'm simply out of ideas. Does anyone have a thought on what else I might investigate? I think I'm ready to start chewing glass. :(
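
    One way to hunt for the mismatch (a diagnostic suggestion, not a guaranteed fix): dump the default-library directives embedded in each .lib to see which CRT each object file actually asks for; any object requesting a different CRT than the rest is the likely culprit.

      :: run from a VS 2010 command prompt, against this project's library
      dumpbin /directives "MFC Toolbox Library.lib" | findstr /i defaultlib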

    Read the article

  • Now Available – Windows Azure SDK 1.6

    - by Shaun
    Microsoft has just announced the Windows Azure SDK 1.6 and the Windows Azure Tools for Visual Studio 1.6; you can now download them through the Web Platform Installer. After you download and install the SDK you will find that SDK 1.6 can live side by side with SDK 1.5, which means you can still use the 1.5 assemblies, but the Visual Studio Tools are upgraded to 1.6. Unlike the previous SDK, this version includes four components: Windows Azure Authoring Tools, Windows Azure Emulators, Windows Azure Libraries for .NET 1.6 and the Windows Azure Tools for Microsoft Visual Studio 2010. There are some significant upgrades in this version:
    Publishing Enhancement: connect to Windows Azure more easily when publishing your application by retrieving a publish settings file; it lets you configure deployment settings without going back to the developer portal.
    Multi-profiles: the publish settings, cloud configuration files, etc. are stored in one or more MSBuild files, making it much easier to switch settings between various build environments.
    MSBuild Command-line Build Support.
    In-Place Upgrade Support.

    Publishing Enhancement

    So let's have a look at the new publishing features. Create a new Windows Azure project in Visual Studio 2010 with an MVC 3 Web Role, right-click the Windows Azure project node in Solution Explorer and select Publish; the new publish dialog appears. In this version the first thing we need to do is connect to our Windows Azure subscription. Click the "Sign in to download credentials" link, and we are taken to the login page to provide our Live ID. The Windows Azure Tool generates a certificate and deploys it to our subscriptions as the Management Certificate (it will use this certificate to connect to the subscription in the next step), and we then download a PUBLISHSETTINGS file, which contains the credentials and subscription information. Back in Visual Studio (the publish dialog should still be open), click the Import button and select the PUBLISHSETTINGS file we just downloaded; all our subscriptions are then shown in the dropdown list. Select the subscription the application should be published under and press Next; we can then select the hosted service, environment, build configuration and service configuration in the dialog. In this version we can create a new hosted service directly here rather than going back to the developer portal: just select the <Create New …> item in the hosted service list and provide the hosted service name and location. Once we click OK, the hosted service is established after several seconds; if we go to the developer portal, we will find the new hosted service in the subscription. Two caveats: a) currently we cannot select the Affinity Group when creating a new hosted service through the Visual Studio publish dialog; b) although we can specify the hosted service name and the DNS prefix separately through the developer portal, we cannot do so from the VS Tool, which means the DNS prefix will be the same as the hosted service name. For example, we specified our hosted service name as "Sdk16Demo", so the public URL would be http://sdk16demo.cloudapp.net/.
    After creating a new hosted service we can select the cloud environment (production or staging), the build configuration (release or debug), and the service configuration (cloud or local), and we can enable Remote Desktop by checking the related checkbox. One thing to note is that in this version, when we configure Remote Desktop we don't need to specify a certificate by default, because Visual Studio generates a new one for us. We can still specify an existing certificate for RDC by clicking the "More Options" button. The Visual Studio Tool creates a separate certificate for the Remote Desktop connection; it will NOT use the certificate that manages the subscription. We can also open the "Advanced Settings" page to specify the deployment label, storage account, IntelliTrace and .NET profiling information, etc. Press Next, and the dialog displays all the settings just specified and saves them as a new profile. The last step is to click the Publish button. Since we enabled the Remote Desktop feature, the first step of publishing is uploading the certificate; the tool then verifies the storage account we specified, uploads the package, and finally creates the website in Windows Azure.

    Multi-Profiles

    After publishing, back in Visual Studio we can find an AZUREPUBXML file under the Profiles folder of the Azure project. It includes all the settings we specified before. If we publish this project again, we can just reuse the current settings (hosted service, environment, RDC, etc.) from this profile without entering them again. This is very useful when we have more than one deployment configuration: for example, one AZUREPUBXML profile for the testing environment (debug build, fewer roles, with RDC and IntelliTrace) and one for production (release build, more roles, without IntelliTrace).

    In-Place Upgrade Support

    Let's change some code in the MVC pages and click the Publish menu from the Azure project node. There is no need to specify any settings; we can reuse the previous ones by loading the Azure profile file (AZUREPUBXML). After clicking the Publish button, the VS Tool shows a dialog indicating that a deployment already exists in the hosted service environment and asks whether to REPLACE it. Notice that in this version the dialog says "replace" rather than "delete": by default the VS Tool now performs an In-Place Upgrade when deploying to a hosted service that already contains a deployment. After clicking Yes, the VS Tool uploads the package and performs the In-Place Upgrade; back in the developer portal, the status of the hosted service turns to "Updating…". In the previous SDK, it would have deleted the whole deployment and published a new one.

    Summary

    When Microsoft announced the feature that allows changing the VM size via In-Place Upgrade, they also mentioned that over the next few versions the user experience of publishing Azure applications would be improved, the goal being to accomplish the whole publishing experience in Visual Studio with no need to touch the developer portal at all. As the new publish dialog in SDK 1.6 shows, a developer can now do the whole process (creating a hosted service, specifying the environment, configuration, remote desktop, and other values) without going back to the developer portal.
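
    As a rough illustration of the MSBuild command-line support mentioned above (the project name is hypothetical, and the exact switches may vary by tools version), packaging a cloud project without opening Visual Studio looks something like:

      :: produces the .cspkg/.cscfg under bin\Release\app.publish
      msbuild MyAzureProject.ccproj /t:Publish /p:Configuration=Release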
    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • ANTS Memory Profiler 7.0

    - by James Michael Hare
    I have always been a fan of the ANTS products (Reflector is absolutely invaluable, and their performance profiler is great as well, and very easy to use!), so I was curious to see what the ANTS Memory Profiler could show me.

    Background

    While a performance profiler tracks how much time is typically spent in each unit of code, a memory profiler gives you much more detail on how and where your memory is being consumed and released in a program. As an example, I'd been working on a data access layer at work to call a market data web service. This web service takes a list of symbols to quote and returns the quote data. To help consolidate the thousands of web requests per second we get, and to reduce load on the web services, we implemented a 5-second cache of quote data: not quite long enough for customers to typically notice a quote go "stale", but just long enough to collapse multiple quote requests for the same symbol in a short period of time. A 5-second cache may not sound like much, but it actually pays off by saving us roughly 42% of our web service calls, while still providing relatively up-to-date information. The question is whether the extra memory involved in maintaining the cache is worth it, so I decided to fire up the ANTS Memory Profiler and take a look at memory usage.

    First Impressions

    The main thing I've always loved about the ANTS tools is their ease of use. Pretty much everything is right there in front of you, in a way that makes it easy to find what you need with little digging required. I've worked with other, older profilers before (which shall remain nameless, other than to hint that one was created by a very large chip maker) where it was a mind-boggling experience to figure out how to do simple tasks. Not so with AMP. The opening dialog is very straightforward: you can choose whether to profile an executable, a web application (either in IIS or from VS's web development server), Windows services, etc. So I chose a .NET executable, navigated to the build location of my test harness, and began profiling. While the application is running, you can see a chart of the memory as it ebbs and wanes with allocations and collections. At any given point in time, you can take snapshots (to compare states), zoom in, or stop.

    Snapshots

    Taking a snapshot also gives you a breakdown of the managed memory heaps for each generation, so you get an idea how many objects are staying around for extended periods of time (as an object lives and survives collections, it gets promoted into higher generations, where collection becomes less frequent). Generating a snapshot brings up an analysis view with very handy graphs that show your generation sizes. Almost all my memory is in Generation 1 in the managed memory component of the first graph, which is good news to me, because Gen 2 collections are much rarer. I once made the mistake of caching data for 30 minutes and found it didn't get collected very quickly after I released my reference, because it had been promoted to Gen 2. Doh!

    Analysis

    It looks (from the second pie chart) like the majority of the allocations were in the string class.
    This is also expected, because the majority of the memory allocated is in the web service responses; it seems the entities I'm adapting to (to avoid being too tightly coupled to the web service proxy classes, which can change easily out from under me) aren't taking a significant portion of memory. I also appreciate that there is clear summary text in key places, such as "No issues with large object heap fragmentation were detected". For novice users, this type of summary information can be critical to getting them to use a tool and develop a good working knowledge of it. There is also a handy link at the bottom, "What to look for on the summary", which loads a web page of help on key points to look for. Clicking over to the session overview, it's easy to compare the samples at each snapshot to see how your memory is growing, shrinking, or staying relatively the same. Looking at my snapshots, I'm pretty happy with the fact that memory allocation and heap size seem to be fairly stable and under control. Once again, you can check the large object heap, generation one heap, and generation two heap across each snapshot to spot trends. Back on the analysis tab, we can go to the [Class List] button to get an idea what classes are making up the majority of our memory usage. To little surprise, System.String was the clear majority of my allocations, though I found it surprising that System.Reflection.RuntimeMethodInfo came in second. I was curious about this, so I selected it and went into the [Instance Categorizer]. This view let me see where these instances of RuntimeMethodInfo were coming from, so I scrolled back through the graph and discovered that they were being held by System.ServiceModel.ChannelFactoryRefCache, and I was satisfied this was just an artifact of my WCF proxy. I also like that down at the bottom of the Instance Categorizer it gives you a series of filters and offers to guide you on which filter to use based on the problem you are trying to find. For example, if I suspected a memory leak, I might filter for survivors in growing classes: that is, for instances of a class that is growing in memory (more are being created than cleaned up), which ones survive garbage collection? This might allow me to drill down and find places where I'm holding onto references by mistake and not freeing them! Finally, if you want to really see all your instances and who is holding onto them (preventing collection), you can go to the "Instance Retention Graph", which shows what references are being held in memory and who is holding onto them.

    Visual Studio Integration

    Of course, VS has its own profiler built in, and for a free bundled profiler it is quite capable, but AMP gives a much cleaner and easier-to-use experience, and when you install it you also get the option of letting it integrate directly into VS. Once you go back into VS after installation, you'll notice an ANTS menu which lets you launch the ANTS profiler directly from Visual Studio. Clicking one of these options fires up the project in the profiler immediately, allowing you to get right in. It doesn't integrate with the Visual Studio windows themselves (like the VS profiler does), but the plethora of information it provides, and the clear and concise manner in which it presents it, makes it well worth it.

    Summary

    If you like the ANTS series of tools, you shouldn't be disappointed with the ANTS Memory Profiler.
    It was so easy to use that I was able to jump in with very little product knowledge and get the information I was looking for. I've used other profilers that came with 3-inch-thick tomes you had to read to get anywhere with the tool, and this one is not like that at all. It's built for your everyday developer to get in and find their problems quickly, and I like that!

    Technorati Tags: Influencers, ANTS, Memory, Profiler
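
    As an aside, the 5-second quote cache described at the top of this article could be sketched roughly as follows. This is a minimal illustration with hypothetical names, not the author's actual implementation:

      using System;
      using System.Collections.Concurrent;

      public class Quote { public string Symbol; public decimal Price; }

      public class QuoteCache
      {
          private readonly TimeSpan _ttl = TimeSpan.FromSeconds(5);
          private readonly ConcurrentDictionary<string, Tuple<DateTime, Quote>> _cache =
              new ConcurrentDictionary<string, Tuple<DateTime, Quote>>();

          // Returns a cached quote if it is under 5 seconds old; otherwise calls
          // the fetch delegate (e.g. the market data web service) and caches the
          // result, collapsing bursts of requests for the same symbol into one call.
          public Quote GetQuote(string symbol, Func<string, Quote> fetch)
          {
              Tuple<DateTime, Quote> entry;
              if (_cache.TryGetValue(symbol, out entry) &&
                  DateTime.UtcNow - entry.Item1 < _ttl)
              {
                  return entry.Item2;
              }

              var quote = fetch(symbol);
              _cache[symbol] = Tuple.Create(DateTime.UtcNow, quote);
              return quote;
          }
      }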

    Read the article

  • Windows web server and SQL Server on same dedicated server

    - by asinc
    I'm currently trying to decide on the best approach to hosting a few moderate-traffic websites for production e-commerce and online applications. We'd like to move to a dedicated server and are looking at this as the most likely machine: quad-core Intel Core 2 Quad Q9550 processor, 2.83 GHz x 4, 4 GB Kingston RAM. This would run Windows Web Server 2008 R2 x64 and potentially also SQL Server 2008 Web Edition and SmarterMail server. Given that we already pay for a high-end VPS for development, testing and shared version control, we'd like to avoid going with two servers for production. We'd like to avoid using shared SQL Server hosting, and we have thought of using the development server as the database server too, but that is potentially a security risk due to its use for development by internal and contract users. The questions are:
    - Do you feel there would be performance degradation by running all this on the same machine?
    - Are there significant issues to be concerned about if we do this? We understand that best practice would be to run separate DB and app servers, but the volume of traffic is currently not that high, and adding another server just for the database is currently too costly.
    - What are others doing out there?
    Alternatively, would you recommend instead going with two separate VPS servers with 2 GB RAM each on Hyper-V, which would cost about the same as the single dedicated server above? Thanks!
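
    If the services do end up co-hosted, one hedged starting point (the number is illustrative, not a sizing recommendation) is to cap SQL Server's memory so IIS and SmarterMail keep a share of the 4 GB:

      -- run once as an administrator on the SQL instance
      EXEC sp_configure 'show advanced options', 1;
      RECONFIGURE;
      EXEC sp_configure 'max server memory (MB)', 2048;
      RECONFIGURE;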

    Read the article

  • Can't successfully run SharePoint Foundation 2010 first-time configuration

    - by Robert Koritnik
    I'm trying to run the non-GUI version of the configuration wizard using PowerShell, because I would like to set the config and admin database names; the GUI wizard doesn't give you all possible options for configuration. I run this command:

    New-SPConfigurationDatabase -DatabaseName "Sharepoint2010Config" -DatabaseServer "developer.pleiado.pri" -AdministrationContentDatabaseName "Sharepoint2010Admin" -DatabaseCredentials (Get-Credential) -Passphrase (ConvertTo-SecureString "%h4r3p0int" -AsPlainText -Force)

    Of course this is all on one line; I've broken it into separate lines to make it easier to read. When I run this command I get this error:

    New-SPConfigurationDatabase : Cannot connect to database master at SQL server at developer.pleiado.pri. The database might not exist, or the current user does not have permission to connect to it.
    At line:1 char:28
    + New-SPConfigurationDatabase <<<< -DatabaseName "Sharepoint2010Config" -DatabaseServer "developer.pleiado.pri" -AdministrationContentDatabaseName "Sharepoint2010Admin" -DatabaseCredentials (Get-Credential) -Passphrase (ConvertTo-SecureString "%h4r3p0int" -AsPlainText -Force)
    + CategoryInfo : InvalidData: (Microsoft.Share...urationDatabase:SPCmdletNewSPConfigurationDatabase) [New-SPConfigurationDatabase], SPException
    + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletNewSPConfigurationDatabase

    I created two domain accounts: SPF_DATABASE (database account) and SPF_ADMIN (farm account). I'm running the PowerShell console as domain administrator. I've tried running SQL Management Studio as domain admin and created a dummy database, and it worked without a problem. I'm running Windows 7 x64 on the machine where SharePoint Foundation 2010 should be installed, which also has SQL Server 2008 R2 preinstalled; Windows Server 2008 R2 Server Core is my domain controller. I've installed SharePoint according to the MS guide http://msdn.microsoft.com/en-us/library/ee554869%28office.14%29.aspx, installing all additional patches related to my configuration. Any ideas what I should do to make it work?
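
    One common cause of exactly this error is that the account running the cmdlet lacks server-level rights on the SQL instance; New-SPConfigurationDatabase has to create databases. Whether that is the case here is a guess, but the usual grant (account and domain names assumed from the question) looks like:

      -- run as a SQL sysadmin on developer.pleiado.pri
      EXEC sp_addsrvrolemember 'PLEIADO\SPF_ADMIN', 'dbcreator';
      EXEC sp_addsrvrolemember 'PLEIADO\SPF_ADMIN', 'securityadmin';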

    Read the article

  • Exchange 2010 add mailbox server to DAG error

    - by Michael
    Hello, I'm having some problems when adding a second mailbox server to my DAG in Exchange 2010. The test setup goes like this:
    1x Windows Server 2008 (DC/DNS)
    2x Windows Server 2008 (Exchange 2010)
    I have made sure all services are up and running and that the "Exchange Trusted Subsystem" account is set as a local admin. When I create a DAG I can add the first mailbox server (A) without any problems, but when I go to add the second (B) it gives me an error saying "Unable to contact the Cluster service on 1 other members (member) of the Database availability group." It does the same if I add (B) first and then try to add (A). Here is a part of the log file:
    [2010-04-05T15:00:27] GetRemoteCluster() for the mailbox server failed with exception = An Active Manager operation failed. Error: An error occurred while attempting a cluster operation. Error: Cluster API '"OpenCluster(EXCHANGE20102.area51.com) failed with 0x6d9. Error: There are no more endpoints available from the endpoint mapper"' failed.. This is OK.
    [2010-04-05T15:00:27] Ignoring previous error, as it is acceptable if the cluster does not exist yet.
    [2010-04-05T15:00:27] DumpClusterTopology: Opening remote cluster AREA51DAG01.
    [2010-04-05T15:00:27] DumpClusterTopology: Failed opening with Microsoft.Exchange.Cluster.Replay.AmClusterApiException: An Active Manager operation failed. Error: An error occurred while attempting a cluster operation. Error: Cluster API '"OpenCluster(AREA51DAG01.area51.com) failed with 0x5. Error: Access is denied"' failed. --- System.ComponentModel.Win32Exception: Access is denied --- End of inner exception stack trace ---
    Any help would be really appreciated, thanks.

    Read the article

  • Dropouts when accessing a share by DFS name

    - by Stephen Woolhead
    I have a strange problem (aren't they all!). I have a DFS root \\domain\files\vms; it has a single target on a different server than the namespace. I can copy a test file set from the target directly via \\server\vms$\testfiles and all is well; the files copy fine. I have repeated these tests many times. If I try to copy the files from the DFS root, I get big pauses in the network traffic: about 50 seconds every couple of minutes, all the traffic just stops for the copy. If I start another copy between the same two machines during this pause, it starts copying fine, so I know it's not an issue with the disks on the server. Every once in a while the copy will fail with no errors; the progress bar will just zip all the way to 100% and the copy dialog will close. Checking the target folder shows that the copy is incomplete. I've moved the LUN to another server and had the same problem. The servers are all 2008 R2; the clients are Vista x64, Windows 7 x64 and 2008 R2, and all have the same problem. Anyone got any ideas? Cheers, Stephen
    More information: I've been running a NetMon trace on the connection when the file copy fails, and what seems to stand out is that when opening a file for which the copy completes, the SMB command looks like this:
    SMB2: C CREATE (0x5), Name=Training\PDC2008\BB34 Live Services Notifications, Awareness, and Communications.wmv@#422082, Context=DHnQ, Context=MxAc, Context=QFid, Context=RqLs, Mid = 245376
    SMB2: R CREATE (0x5), Context=MxAc, Context=RqLs, Context=DHnQ, Context=QFid, FID=0xFFFFFFFF00000015, Mid = 245376
    But for the last file, when the copy dialog closes, it looks like this:
    SMB2: C CREATE (0x5), Name=gt\files\Media\Training\PDC2008\BB36 FAST Building Search-Driven Portals with Microsoft Office SharePoint Server 2007 and Microsoft Silverlight.wmv@#859374, Context=DHnQ, Context=MxAc, Context=QFid, Context=RqLs, Mid = 77
    SMB2: R , Mid = 77 - NT Status: System - Error, Code = (58) STATUS_OBJECT_PATH_NOT_FOUND
    The main difference seems to be in the name: one is relative to the open file share, while the other has gained the gt\files\media prefix, which is the name of the DFS target. These failures are always preceded by a logoff and logon of the SMB target. Might have to bump this one to PSS.

    Read the article

  • ASP.NET application can no longer write to DB after having run out of disk space

    - by remi.despres-smyth
    I'm a software developer troubleshooting a sticky problem on a client's production server, and I've got a bit of a problem. They have a virtual server running Windows Server 2008, SQL Server 2008 R1 and IIS7. It was provisioned with two partitions: one with the OS (~15 GB) and one with IIS's web sites (another ~15 GB). My application running on this server had been running perfectly well, up until about an hour ago, when it started throwing System.IO.IOException: "There is not enough space on disk". As soon as my client notified me, I cleared up some space on C:\, emptied the recycle bin, and restarted SQL Server and IIS. The web server came back up and the application was running, but it no longer saves information to the database. No error message comes up; the application can get information out of the DB, but it can no longer save data back to it. I rebooted the server, to no effect. I spoke with a sysadmin at the hosting company, and he says SQL Server appears to have come up fine and the database is not in read-only mode. I confirmed that, as I can add records to tables from SQL Server Management Studio. I looked at the event log immediately after trying to save an edited record in the app, and no new events appear there that I can tell. I'm assuming this is related to having run out of space, as it was all working fine prior to that, but I'm at a bit of a loss as to what exactly needs a kick in the pants to get going again. Can anyone help me out? What the heck is going on here?
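
    A hedged guess at where to look first: a transaction log that filled its disk and can no longer grow will fail writes while reads keep working, and the app may be swallowing the resulting exception. Something like the following shows whether that is the case:

      -- purely diagnostic: any database not ONLINE, or a log at ~100%?
      SELECT name, state_desc FROM sys.databases;
      DBCC SQLPERF(LOGSPACE);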

    Read the article

  • Why are my Windows 7 updates continuously failing?

    - by Chris C.
    I'm an advanced-level user here with an odd issue. I have two Windows updates that are failing to install, every single time. I'm getting a mysterious "Code 1" error on both updates, an error for which I'm having difficulty finding a solution. The updates in question are:
    Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
    System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]
    Because these updates are failing, the Shut Down button in my start menu always has the shield icon next to it, indicating that "new" updates will be installed on shut down. But, of course, they'll fail, and when the PC is restarted the shield icon is still there. When checking the update history and viewing the details of the failed updates, I get the following:

    Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
    Installation date: 6/29/2011 3:00 AM
    Installation status: Failed
    Error details: Code 1
    Update type: Important
    A security issue has been identified leading to MFC application vulnerability in DLL planting due to MFC not specifying the full path to system/localization DLLs. You can protect your computer by installing this update from Microsoft. After you install this item, you may have to restart your computer.
    More information: http://go.microsoft.com/fwlink/?LinkId=216803

    System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]
    Installation date: 6/28/2011 3:00 AM
    Installation status: Failed
    Error details: Code 1
    Update type: Important
    This tool is being offered because an inconsistency was found in the Windows servicing store which may prevent the successful installation of future updates, service packs, and software. This tool checks your computer for such inconsistencies and tries to resolve issues if found.
    More information: http://support.microsoft.com/kb/947821

    About My System: I'm running Windows 7 Home Premium x64 Edition. This is a custom PC build and the OS was installed fresh, not an upgrade from a previous version. I've been running this system for about 4 months. Windows updates aside, the system is usually quite stable. Thanks in advance for your help!

    Read the article

  • Where should CentOS users get /usr/share/virtio-win/drivers for virt-v2v?

    - by Philip Durbin
    I need to migrate a number of virtual machines from VMware ESX to CentOS 6 KVM hypervisors. Ultimately, I wrote an RPM spec file that solved my problem at https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec but I'm not sure if there's another RPM in base CentOS or EPEL (something standard) I should be using instead. Originally, I was getting this "No root device found in this operating system image" error when attempting to migrate a Windows 2008 VM:
    [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
    -os transferimages --network default my-vm
    virt-v2v: No root device found in this operating system image.
    I solved this with a simple yum install libguestfs-winsupport, since the docs say: "If you attempt to convert a virtual machine using NTFS without the libguestfs-winsupport package installed, the conversion will fail." Next I got an error about missing drivers for Windows 2008:
    [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
    -os transferimages --network default my-vm
    my-vm_my-vm: 100% [====================================]D
    virt-v2v: Installation failed because the following files referenced in the configuration file are required, but missing: /usr/share/virtio-win/drivers/amd64/Win2008
    I resolved this by grabbing an ISO from Fedora at http://alt.fedoraproject.org/pub/alt/virtio-win/latest/ as recommended by http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers and building an RPM from it with this spec file: https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec. Now virt-v2v exits without error:
    [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
    -os transferimages --network default my-vm
    my-vm_my-vm: 100% [====================================]D
    virt-v2v: my-vm configured with virtio drivers.
    [root@kvm01b ~]#
    Now, my question is: rather than the virtio-win RPM from the spec file I wrote, is there some other, more standard RPM in base CentOS or EPEL that will resolve the error above? Here's a bit more detail about my setup:
    [root@kvm01b ~]# cat /etc/redhat-release
    CentOS release 6.2 (Final)
    [root@kvm01b ~]# rpm -q virt-v2v
    virt-v2v-0.8.3-5.el6.x86_64
    See also Bug 605334 – VirtIO driver for windows does not show specific OS: Windows 7, Windows 2003

    Read the article

  • Can't successfully run SharePoint Foundation 2010 first-time configuration

    - by Robert Koritnik
    I'm trying to run the non-GUI version of the configuration wizard using PowerShell, because I would like to set the config and admin database names; the GUI wizard doesn't give you all possible options for configuration (but it doesn't work either way). I run this command:

    New-SPConfigurationDatabase -DatabaseName "Sharepoint2010Config" -DatabaseServer "developer.mydomain.pri" -AdministrationContentDatabaseName "Sharepoint2010Admin" -DatabaseCredentials (Get-Credential) -Passphrase (ConvertTo-SecureString "%h4r3p0int" -AsPlainText -Force)

    Of course this is all on one line; I've broken it into separate lines to make it easier to read. When I run this command I get this error:

    New-SPConfigurationDatabase : Cannot connect to database master at SQL server at developer.mydomain.pri. The database might not exist, or the current user does not have permission to connect to it.
    At line:1 char:28
    + New-SPConfigurationDatabase <<<< -DatabaseName "Sharepoint2010Config" -DatabaseServer "developer.mydomain.pri" -AdministrationContentDatabaseName "Sharepoint2010Admin" -DatabaseCredentials (Get-Credential) -Passphrase (ConvertTo-SecureString "%h4r3p0int" -AsPlainText -Force)
    + CategoryInfo : InvalidData: (Microsoft.Share...urationDatabase:SPCmdletNewSPConfigurationDatabase) [New-SPConfigurationDatabase], SPException
    + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletNewSPConfigurationDatabase

    I created two domain accounts and haven't added them to any group: SPF_DATABASE (database account) and SPF_ADMIN (farm account). I'm running the PowerShell console as domain administrator. I've tried running SQL Management Studio as domain admin and created a dummy database, and it worked without a problem. I'm running Windows 7 x64 on the machine where SharePoint Foundation 2010 should be installed, which also has the SQL Server 2008 R2 database preinstalled; Windows Server 2008 R2 Server Core is my domain controller, which just serves domain features and nothing else. I've installed SharePoint according to the MS guide http://msdn.microsoft.com/en-us/library/ee554869%28office.14%29.aspx, installing all additional patches related to my configuration. Any ideas what I should do to make it work?

    Read the article

  • Apache 2.2, worker mpm, mod_fcgid and PHP: Can't apply process slot

    - by mopoke
    We're having an issue on an Apache server where every 15 to 20 minutes it stops serving PHP requests entirely. On occasion it will return a 503 error; other times it recovers enough to serve the page, but only after a delay of a minute or more. Static content is still served during that time. In the log file, errors are reported along the lines of:
    [Wed Sep 28 10:45:39 2011] [warn] mod_fcgid: can't apply process slot for /xxx/ajaxfolder/ajax_features.php
    [Wed Sep 28 10:45:41 2011] [warn] mod_fcgid: can't apply process slot for /xxx/statics/poll/index.php
    [Wed Sep 28 10:45:45 2011] [warn] mod_fcgid: can't apply process slot for /xxx/index.php
    [Wed Sep 28 10:45:45 2011] [warn] mod_fcgid: can't apply process slot for /xxx/index.php
    There is RAM free and, indeed, it seems that more PHP processes get spawned. /server-status shows lots of threads in the "W" state, as well as some FastCGI processes in the "Exiting (communication error)" state. I rebuilt mod_fcgid from source, as the packaged version was quite old; it's now the current stable version (2.3.6) of mod_fcgid.
    FCGI config:
    FcgidBusyScanInterval 30
    FcgidBusyTimeout 60
    FcgidIdleScanInterval 30
    FcgidIdleTimeout 45
    FcgidIOTimeout 60
    FcgidConnectTimeout 20
    FcgidMaxProcesses 100
    FcgidMaxRequestsPerProcess 500
    FcgidOutputBufferSize 1048576
    System info:
    Linux xxx.com 2.6.28-11-server #42-Ubuntu SMP Fri Apr 17 02:45:36 UTC 2009 x86_64 GNU/Linux
    DISTRIB_ID=Ubuntu
    DISTRIB_RELEASE=9.04
    DISTRIB_CODENAME=jaunty
    DISTRIB_DESCRIPTION="Ubuntu 9.04"
    Apache info:
    Server version: Apache/2.2.11 (Ubuntu)
    Server built: Aug 16 2010 17:45:55
    Server's Module Magic Number: 20051115:21
    Server loaded: APR 1.2.12, APR-Util 1.2.12
    Compiled using: APR 1.2.12, APR-Util 1.2.12
    Architecture: 64-bit
    Server MPM: Worker
    threaded: yes (fixed thread count)
    forked: yes (variable process count)
    Server compiled with....
    -D APACHE_MPM_DIR="server/mpm/worker"
    -D APR_HAS_SENDFILE
    -D APR_HAS_MMAP
    -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
    -D APR_USE_SYSVSEM_SERIALIZE
    -D APR_USE_PTHREAD_SERIALIZE
    -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
    -D APR_HAS_OTHER_CHILD
    -D AP_HAVE_RELIABLE_PIPED_LOGS
    -D DYNAMIC_MODULE_LIMIT=128
    -D HTTPD_ROOT=""
    -D SUEXEC_BIN="/usr/lib/apache2/suexec"
    -D DEFAULT_PIDLOG="/var/run/apache2.pid"
    -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
    -D DEFAULT_ERRORLOG="logs/error_log"
    -D AP_TYPES_CONFIG_FILE="/etc/apache2/mime.types"
    -D SERVER_CONFIG_FILE="/etc/apache2/apache2.conf"
    Apache modules loaded: alias.load auth_basic.load authn_file.load authz_default.load authz_groupfile.load authz_host.load authz_user.load autoindex.load cgi.load deflate.load dir.load env.load expires.load fcgid.load headers.load include.load mime.load negotiation.load rewrite.load setenvif.load ssl.load status.load suexec.load
    PHP info:
    PHP 5.2.6-3ubuntu4.6 with Suhosin-Patch 0.9.6.2 (cli) (built: Sep 16 2010 19:51:25)
    Copyright (c) 1997-2008 The PHP Group
    Zend Engine v2.2.0, Copyright (c) 1998-2008 Zend Technologies
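
    One mismatch worth checking (a guess based on a common mod_fcgid + PHP gotcha, not a confirmed diagnosis): if PHP's own PHP_FCGI_MAX_REQUESTS is lower than FcgidMaxRequestsPerProcess, PHP exits while mod_fcgid still expects the process to take requests, which can produce exactly these communication errors. A sketch of the matching pair:

      # PHP must be allowed at least as many requests as mod_fcgid will send it
      FcgidInitialEnv PHP_FCGI_MAX_REQUESTS 500
      FcgidMaxRequestsPerProcess 500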

    Read the article

  • How to redirect all Internet traffic to OpenVPN Server

    - by JuliaS
    I have seen working solutions for forcing Internet traffic to go through the OpenVPN server, but they are all done in Linux; all I want to know is how to add an entry to the route table in Windows to make this happen. Connectivity between the client and server is fine: my Windows 7 client can establish a connection to the Windows 2008 Server, but once the connection is established, Internet traffic still goes out directly from the local Windows 7 machine. Here are the details:
    Server: Windows 2008 Server with one NIC
    OpenVPN IP address: 192.168.0.1
    Local NIC IP address (connects the server to the Internet): 10.242.69.107
    Client: Windows 7 with one NIC
    OpenVPN IP address: 192.168.0.2
    ISP-allocated IP address: 10.0.8.2 (gateway 10.0.8.1)
    Server OpenVPN config:
    dev tun
    ifconfig 192.168.0.1 192.168.0.2
    secret static.key
    push "redirect-gateway def1"
    Client OpenVPN config:
    remote xxx.xxx.com
    dev tun
    ifconfig 192.168.0.2 192.168.0.1
    secret static.key
    I'm not an expert at adding routes, etc. I would be grateful if someone could let me know how to add this entry to my server/client route table.
    EDIT: Output from the client's netstat -rnv:
    IPv4 Route Table
    ===========================================================================
    Active Routes:
    Network Destination        Netmask            Gateway        Interface    Metric
    0.0.0.0                    0.0.0.0            10.0.8.1       10.0.8.2     20
    10.0.8.0                   255.255.255.252    On-link        10.0.8.2     276
    10.0.8.2                   255.255.255.255    On-link        10.0.8.2     276
    10.0.8.3                   255.255.255.255    On-link        10.0.8.2     276
    127.0.0.0                  255.0.0.0          On-link        127.0.0.1    306
    127.0.0.1                  255.255.255.255    On-link        127.0.0.1    306
    127.255.255.255            255.255.255.255    On-link        127.0.0.1    306
    192.168.0.0                255.255.255.252    On-link        192.168.0.2  286
    192.168.0.2                255.255.255.255    On-link        192.168.0.2  286
    192.168.0.3                255.255.255.255    On-link        192.168.0.2  286
    224.0.0.0                  240.0.0.0          On-link        127.0.0.1    306
    224.0.0.0                  240.0.0.0          On-link        10.0.8.2     276
    224.0.0.0                  240.0.0.0          On-link        192.168.0.2  286
    255.255.255.255            255.255.255.255    On-link        127.0.0.1    306
    255.255.255.255            255.255.255.255    On-link        10.0.8.2     276
    255.255.255.255            255.255.255.255    On-link        192.168.0.2  286
    ===========================================================================
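
    One thing that stands out in the configs above (an educated guess, not verified against this exact setup): push directives such as "redirect-gateway def1" are only honored in TLS client/server mode, so in a static-key setup the client has to request the redirect itself:

      # Windows 7 client config: set the redirect locally instead of
      # relying on the server's (ignored) push directive
      redirect-gateway def1

    Alternatively, the same effect can be had by hand with two route add commands on the client: a /32 host route to the server's public address via the ISP gateway (10.0.8.1), then a default route via the tunnel endpoint 192.168.0.1.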

    Read the article

  • Does WebDAV even work on IIS 7? I say nay

    - by FlavorScape
    I've tried every configuration from the top 10 Stack Overflow and Server Fault results for WebDAV 405 on IIS (for the verbs PROPFIND and PUT). I'm running Server 2008 SP2. I followed all the instructions here. I'm no stranger to configuring servers, but this has gotten nowhere after 8 hours. Confirmed system.webServer in applicationHost.config:
    <add name="WebDAV" path="*" verb="PROPFIND,PROPPATCH,MKCOL,PUT,COPY,DELETE,MOVE,LOCK,UNLOCK" modules="WebDAVModule" resourceType="Unspecified" requireAccess="None" />
    Port 443 with basic auth: same issue. Tried port 80 with Windows auth: broken (405). Windows authentication: check. Added authoring rules for the default site and application: check. Not the firewall: check. Added the "Desktop Experience" role feature. Tried HTTPS with Basic Authentication on port 443: does not work. No other services such as SharePoint are running: check. Confirmed the user has read/write NT-level permissions for the folder/virtual dir. Tried net use * http://localhost /user:MYDOMAIN\me myPass and get error 1920; if I don't authenticate, I get error 67. Confirmed I'm not applying request filtering to WebDAV:
    <requestFiltering>
    <fileExtensions applyToWebDAV="false" />
    <verbs applyToWebDAV="false" />
    <hiddenSegments applyToWebDAV="false" />
    The response: 405 - HTTP verb used to access this page is not allowed. The page you are looking for cannot be displayed because an invalid method (HTTP verb) was used to attempt access. SHOULD I JUST GIVE UP? Other questions that helped none:
    405 - 'Method not Allowed' adding service hosted in IIS7
    webdav on iis7.5 - simply cannot make it work
    http://studentguru.gr/b/kingherc/archive/2009/11/21/webdav-for-iis-7-on-windows-server-2008-r2.aspx
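
    One more thing to compare, offered as an assumption (syntax from memory) since most settings above were already checked: confirm WebDAV authoring is actually enabled at the site level in applicationHost.config, not just that the module and verbs are listed:

      %windir%\system32\inetsrv\appcmd set config "Default Web Site" -section:system.webServer/webdav/authoring /enabled:true /commit:apphost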

    Read the article

  • SQL Server Express backup/restore error: The Media Family on Device is Incorrectly Formed.

    - by Chris
    Basically, I'm having this issue: http://www.sqlcoffee.com/Troubleshooting047.htm What I'm doing is running a script I found online (http://pastebin.com/3n0ZfybL) to do a full backup, then rar'ing up the file and moving it to my computer. The CRC of the backup file inside the rar is correct on both computers, so there is no problem with data being corrupted when I transfer it. But then I go and try to restore the database on my dev computer here and I get the errors "sql server cannot process this media family" ... "msg 3013". Why is this happening? I'd test out the backup on the server I'm getting it from, but it's a production server. Edit: I was about to say how I wasn't doing anything stupid like trying to restore the database to an earlier version of SQL Server, but apparently I am: From: Microsoft SQL Server 2008 (SP1) - 10.0.2531.0 (Intel X86) Mar 29 2009 10:27:29 Copyright (c) 1988-2008 Microsoft Corporation Express Edition on Windows NT 6.0 (Build 6002: Service Pack 2) To: Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86) Feb 9 2007 22:47:07 Copyright (c) 1988-2005 Microsoft Corporation Express Edition on Windows NT 6.0 (Build 6001: Service Pack 1) Let me get back to this post after I reinstall this.
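
    For anyone hitting this later, a quick way to confirm the version mismatch before attempting a restore (standard T-SQL; the file path is hypothetical):

      RESTORE HEADERONLY FROM DISK = N'C:\backups\mydb.bak';
      -- SoftwareVersionMajor 10 = SQL Server 2008, 9 = SQL Server 2005;
      -- a backup can never be restored onto an older engine version.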

    Read the article

  • DFS replication and the SYSTEM user (NTFS permissions)

    - by HopelessN00b
    Question for which I'm having trouble finding an answer on Google or TechNet: does granting the SYSTEM user permissions to DFS-shared files and folders have any effect on DFS replication? (And while we're at it, is there any good reason not to let SYSTEM have permissions to DFS-shared files?) It comes up because I have a collection of DFS namespaces and folders that I'm not able to make someone else's problem, and while troubleshooting a problem where one DFS replica just wasn't replicating with another for no discernible reason, I observed that SYSTEM didn't have any permissions granted to any of the files or folders in the folder in question. So I set SYSTEM to have full control and propagated it down, and our DFS health diagnostic reports went from showing a backlog of ~80 files to a backlog of ~100,000... and things started replicating, including a number of files that had been missing on one server or the other (so more than just the permissions changes started replicating). Naturally, this made me curious as to whether DFS needs the SYSTEM account to have permissions to do its work, or whether just any change to the folder tree in question would have prompted DFS to jump into action. If it matters, our DFS namespaces were set up under 2000/2003, and I have just recently finished upgrading all the servers to 2008 R2 or 2012 (with UAC enabled, blech), but have not yet gotten around to raising the DFS namespace functional levels to Server 2008. (And bonus points if anyone has an official Microsoft article on NTFS file permissions and the SYSTEM account as it pertains to DFS or network files.)
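
    For reference, the DFS Replication service runs as Local System, so stripping SYSTEM's NTFS access from a replicated tree is generally asking for trouble. Re-granting it down the tree (the path here is hypothetical) is a one-liner:

      icacls "D:\DFSRoots\files" /grant "NT AUTHORITY\SYSTEM:(OI)(CI)F" /T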

    Read the article

  • XAMPP Apache on Windows 7 returns HTTP header only

    - by bumperbox
    I am having issues with XAMPP running on Windows 7 RC32. I type in a localhost address and get a header back only, no page content. Some days it works fine; other days I can't get it to work after multiple attempts, reboots or otherwise. The request doesn't even get put into the access log, which seems unusual. Here is the log file at startup, in case that helps. Any ideas?
    [Wed Sep 09 12:27:08 2009] [notice] Digest: generating secret for digest authentication ...
    [Wed Sep 09 12:27:08 2009] [notice] Digest: done
    [Wed Sep 09 12:27:09 2009] [notice] Apache/2.2.11 (Win32) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.8i PHP/5.2.9 configured -- resuming normal operations
    [Wed Sep 09 12:27:09 2009] [notice] Server built: Dec 10 2008 00:10:06
    [Wed Sep 09 12:27:09 2009] [notice] Parent: Created child process 2500
    [Wed Sep 09 12:27:10 2009] [notice] Digest: generating secret for digest authentication ...
    [Wed Sep 09 12:27:10 2009] [notice] Digest: done
    [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Child process is running
    [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Acquired the start mutex.
    [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Starting 250 worker threads.
    [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Starting thread to listen on port 443.
    [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Starting thread to listen on port 80.
    [Wed Sep 09 12:27:15 2009] [notice] Parent: child process exited with status 255 -- Restarting.
    [Wed Sep 09 12:27:15 2009] [notice] Digest: generating secret for digest authentication ...
    [Wed Sep 09 12:27:15 2009] [notice] Digest: done
    [Wed Sep 09 12:27:16 2009] [notice] Apache/2.2.11 (Win32) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.8i PHP/5.2.9 configured -- resuming normal operations
    [Wed Sep 09 12:27:16 2009] [notice] Server built: Dec 10 2008 00:10:06
    [Wed Sep 09 12:27:16 2009] [notice] Parent: Created child process 3252
    [Wed Sep 09 12:27:17 2009] [notice] Digest: generating secret for digest authentication ...
    [Wed Sep 09 12:27:17 2009] [notice] Digest: done
    [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Child process is running
    [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Acquired the start mutex.
    [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Starting 250 worker threads.
    [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Starting thread to listen on port 443.
    [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Starting thread to listen on port 80.

    Read the article

  • Networking problems in VMWare with wireless bridge

    - by Robert Koritnik
    Barebone data:
    Virtualization: VMware Workstation 6.5 (latest)
    Host: Windows Server 2008 x64
    Guest: Windows Server 2008 x86
    Host network adapter: wireless
    Guest network adapter 1: over bridged VMnet (automatic)
    Guest network adapter 2: over host-only VMnet
    Problem: When I surf the net within the VM, my Internet connection just gets stalled (not dropped). It doesn't experience any timeout whatsoever; it just stops downloading/communicating. For instance: I start downloading a file with a browser (IE/FF/CR, doesn't matter) and I have to pause/restart the download when the speed drops to 0. I could wait indefinitely, but the connection won't pick up automatically. What did I miss in my network configuration?
    Update 1: I've tested this in various combinations. It works fine when the host is connected via Ethernet, but when it is connected via WiFi, the connection on the guest behaves as described above. It connects fine and gets a valid IP from DHCP; everything is cool as long as you don't start doing some intensive network traffic (i.e. download a 2 MB file). In that case it starts downloading and stops after a while; the speed just drops to 0 B/s. Sometimes it picks back up, sometimes it doesn't. The connection still stays up and works; I can ping around with no problem.

    Read the article

  • Is 30 calls / second a lot for one IIS server?

    - by Lieven Cardoen
    We have a RIA application that 300 clients use concurrently in an intranet environment. Together they make 30 calls/second to IIS (ASP.NET); actually it's 60, but the calls are load-balanced over two IIS servers. Half of the calls fetch an asset (a caching profile is used, so most of the time the cache is hit); the other half save data to a SQL Server. Retrieving an asset is done with an .aspx page. Saving the data happens via WebORB, ASP.NET and SQL Server, so some processing is needed by WebORB (AMF decoding, GZIP, ...). We also use Spring.NET, and some of the container objects have request scope (not many). IIS servers: virtual machines, 4 CPUs, 2 GB RAM, based on Windows 2008 x64 SP2 Enterprise Edition. SQL Server 2008 is used. Apparently the CPU of both IIS servers is constantly around 60-70%. Now, my question: is a load of 60-70% acceptable, and how could we possibly bring that down (maybe to using only one IIS server)? Also, is 2 GB RAM enough? Assets can be up to 20 MB, but on average they are about 30 KB (the 60-70% load is achieved with assets around 30 KB). The data that gets saved with WebORB is very small (2 KB) and is just one object.

    Read the article

  • Windows Server Hyper-V guests cannot see each other on network

    - by Noldorin
    I have a Hyper-V physical machine along with two standard laptops running within my LAN (connected by an ASUS-RT56U router). The physical server runs Windows Hyper-V Server 2008 R2, with two Windows Server 2008 R2 (full) guest VMs installed and running within it. Both laptops run Windows 7. All OSs are 64-bit. Opening up Network in Windows Explorer on either of the two laptops displays both laptops in the LAN fine. However, neither of the guest VMs on the server (nor the host itself) is displayed. Indeed, the guest VMs cannot see each other in the Network view either. I can ping all computers (laptops and servers) without problems from within the LAN, but all of the servers are simply not visible from anywhere. In addition, the Network Map screen (accessible via the Network and Sharing Center) gives me an error message: "An error happened during the mapping process." I'm suspecting this might have something to do with how LLTD (Link Layer Topology Discovery) is working on the network. Worth noting, though, is that before my server was on the network, the Network Map screen displayed fine (as far as I can remember).
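
    If it is LLTD-related, a reasonable first step (a suggestion, not a confirmed fix) is to make sure Network Discovery is allowed through the firewall on the servers:

      netsh advfirewall firewall set rule group="Network Discovery" new enable=Yes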

    Read the article
