Search Results

Search found 37344 results on 1494 pages for 'sql download'.

Page 478/1494 | < Previous Page | 474 475 476 477 478 479 480 481 482 483 484 485  | Next Page >

  • SSAS: Using fake dimension and scopes for dynamic ranges

    - by DigiMortal
    In one of my BI projects I needed to find the count of objects in an income range. The usual solution with a range dimension was useless because the range an object belongs to changes over time. These ranges depend on a calculation done over the incomes measure, so I really had no option to use a classic solution. Thanks to the SSAS forums I got my problem solved, and here is the solution.

    The problem – how to create dynamic ranges? I have two dimensions in the SSAS cube: one for invoices related to object rents and the other for objects. There is a measure that sums invoice totals, and two calculations. One of these calculations performs some computations based on object income and some other object attributes. The second calculation uses the first one to define the income range an object belongs to. What I need is a query that returns how many objects there are in each group. I cannot use a dimension for the range because on one date an object may belong to one range and two days later to another income range. For example, if an object is not rented out for two days it makes no money and its income stays the same as before. If the object is rented out after two days it makes some income, and this income may move it to another income range.

    Solution – fake dimension and scopes. Thanks to Gerhard Brueckl from pmOne I got everything working fine after some struggling with BI Studio. The original discussion he pointed out can be found in the SSAS official forums thread "Create a banding dimension that groups by a calculated measure". The solution is pretty simple by nature – we have to define a fake dimension for our range and use scopes to assign values to an object count measure. The object count measure is primitive – it just counts objects, and that's it. We will use it to find out how many objects belong to one range or another. We also need a table for the fake ranges, and we have to fill it with the ranges used in the ranges calculation. After creating the table and filling it with ranges we can add the fake range dimension to our cube. Let's now see how to solve the problem step by step.

    Solving the problem. Suppose you have the ranges calculation defined like this:

        CASE
            WHEN [Measures].[ComplexCalc] < 0 THEN 'Below 0'
            WHEN [Measures].[ComplexCalc] >= 0 AND [Measures].[ComplexCalc] <= 50 THEN '0 - 50'
            ...
        END

    Let's now create a new table in our analysis database and name it FakeIncomeRange. Here is the definition for the table:

        CREATE TABLE [FakeIncomeRange] (
            [range_id] [int] IDENTITY(1,1) NOT NULL,
            [range_name] [nvarchar](50) NOT NULL,
            CONSTRAINT [pk_fake_income_range] PRIMARY KEY CLUSTERED
            (
                [range_id] ASC
            )
        )

    Don't forget to fill this table with the range labels you are using in the ranges calculation. To use the ranges from the table we have to add this table to our data source view and create a new dimension. We cannot bind this table to other tables; we have to leave it as it is. Our dimension has two attributes: ID and Name. The next thing to create is the calculation that returns the object count. This calculation is also fake, because we override its values for all ranges later. The object count measure can be defined as a calculation like this:

        COUNT([Object].[Object].[Object].members)

    Now comes the most crucial part of our solution – defining the scopes. Based on the data used in this posting we have to define a scope for each of our ranges. Here is the example for the first range:

        SCOPE([FakeIncomeRange].[Name].&[Below 0], [Measures].[ObjectCount])
            This = COUNT(
                       FILTER(
                           [Object].[Object].[Object].members,
                           [Measures].[ComplexCalc] < 0
                       )
                   )
        END SCOPE

    To get these scopes defined in the cube we need MDX script blocks for each line given here. Take a look at the screenshot to get a better idea of what I mean. This example is taken from SQL Server Books Online to avoid conflicts with the NDA. :) From the previous example the lines (MDX scripts) are: the line starting with SCOPE, the block for This =, and the line with END SCOPE. And now it is time to deploy and process our cube. Although you may see examples where there are semicolons at the end of the statements, you don't need them. The Visual Studio BI tools generate a separate command from each script block, so you don't need to worry about it.

    Read the article

  • Azure, don't give me multiple VMs, give me one elastic VM

    - by FransBouma
    Yesterday, Microsoft revealed major new features for Windows Azure (see ScottGu's post). It all looks shiny and great, but after reading most of the material describing the new features, I still find the overall idea behind all of it flawed: why should I care how many VMs my web app runs on? Isn't that a problem for the Windows Azure engineers / software to solve? And what if I need the file system, why can't I simply get a virtual filesystem?

    To illustrate my point, let's use a real example: a product website with a customer system/database and next to it a support site with an accompanying database. Both are written in .NET, use ASP.NET and each uses a SQL Server database. The product website offers files for customers to download, very simple. You have a couple of options to host these websites:

    - Buy a server, place it in a rack at an ISP and run the sites on that server
    - Use 'shared hosting' with an ISP, which means your sites' appdomains are running on the same machine, as are the stored files, and the databases are hosted on the same server as the other shared databases
    - Hire a VM, install your OS of choice at an ISP, and host the sites on that VM; basically the same as the first option, except you don't have a physical server
    - At some cloud vendor, either host the sites 'shared' or in a VM. See above.

    With all of those options, scalability is a problem, even with the cloud-based ones, though not for the same reasons:

    - The physical server solution has the obvious problem that if you need more power, you need to buy a bigger server or more servers, which requires you to add replication and other overhead
    - Shared hosting solutions are almost always capped on memory usage / traffic and database size: if your sites get too big, you have to move out of the shared hosting environment and start over with one of the other solutions
    - The VM solution, be it a VM at an ISP or 'in the cloud' at e.g. Windows Azure or Amazon, in theory allows scaling out by simply instantiating more VMs; however, that too introduces the same overhead problems as the physical servers: suddenly more than one instance runs your sites

    If a cloud vendor offers its services in the form of VMs, you won't gain much over having a VM at some ISP: the main problems you have to work around are still there. When you spin up more than one VM, your application must be completely stateless at any moment, including the DB subsystem, because what's in memory in instance 1 might not be in memory in instance 2. This might sound trivial, but it's not. A lot of the websites out there started rather small: they were perfectly runnable on a single machine with normal memory and CPU power. After all, you don't need a big machine to run a website with even thousands of users a day. Moving these sites to a multi-VM environment will cause a problem: all the in-memory state they use, all the multi-page transitions they use while keeping state across the transition, they can't do that anymore like they did on a single machine. State is something of the past: you have to store every byte of state in either a DB, a viewstate or a cookie somewhere, so that with the next request all state information is available through the request, as nothing is kept in memory.

    Our example uses a bunch of files in a file system. Using multiple VMs will require that these files move to a cloud storage system which is mounted in each VM, so we don't have to store the files on each VM. This might require different file paths, but that change should be minor. What's perhaps less minor is the maintenance procedure in place on the new type of cloud storage used: instead of FTP-ing into a VM, you might have to update the files using different ways / tools. All in all this makes moving an existing website, which was written for an environment that's based around a VM (namely .NET with its CLR), overly cumbersome and problematic: it forces you to refactor your website system to be usable 'in the cloud', which is caused by the limited way in which e.g. Windows Azure offers its cloud services: in blocks of VMs.

    Offer a scalable, flexible VM which extends with my needs. Instead, cloud vendors should simply offer one VM to me. On that VM I run the websites, store my DB and my files. As it's a virtual machine, how this machine is actually run on physical hardware (e.g. partitioned) I don't care about, as that's a problem for the cloud vendor to solve. If I need more resources, e.g. I have more traffic to my server, way more visitors per day, the VM stretches, as if I had bought a bigger box. This frees me from the problem which comes with multiple VMs: I don't have any refactoring to do at all. I can simply build my website as if it runs on my local hardware server, upload it to the VM offered by the cloud vendor, install it on the VM and I'm done.

    "But that might require changes to Windows!" Yes, but Microsoft is Windows. Windows Azure is their service; they can make whatever change to what they offer to make it look like it's Windows. Yet they're stuck, like Amazon, in thinking in VMs, which forces developers to 'think ahead' and gamble whether they will need to migrate to a cloud with multiple VMs in the future or not. Which comes down to: gamble whether they should invest time in code / architecture which they might never need. (YAGNI anyone?) So the VM we're talking about: is that a low-level VM which runs a guest OS, or is that VM a different kind of VM?

    The flexible VM: .NET's CLR? My example websites are ASP.NET based, which means they run inside a .NET appdomain, on the .NET CLR, which is a VM. The only physical OS resource the sites need is the file system, but this too is accessed through .NET. In short: all the websites see is what .NET allows the websites to see; the world as the websites know it is what .NET shows them and lets them access. How the .NET appdomain is run physically, that's the concern of .NET, not mine. This begs the question why Windows Azure doesn't offer virtual appdomains. Or better: .NET environments which look like one machine but could physically be multiple machines. In such an environment, no change has to be made to the websites to migrate them from a local machine or own server to the cloud to get proper scaling: the .NET VM will simply scale with the need: more memory needed, more CPU power needed, it stretches. What it offers to the application running inside the appdomain simply increases, but is not fragmented: all resources are available to the application. This means that the problem of how to scale is back where it should be: with the cloud vendor.

    "Yeah, great, but what about the databases?" The .NET application communicates with the database server through an ADO.NET provider. Where the database is located is not a problem of the appdomain: the ADO.NET provider has to solve that. In other words: we can host the databases in an environment which offers itself as a single resource, is accessible through one connection string without replication overhead on the outside, and use that environment inside the .NET VM as if it were a single DB.

    But what about memory replication and other problems? This environment isn't simple, at least not for the cloud vendor. But it is simple for the customer who wants to run his sites in that cloud: no work needed. No refactoring of existing code needed. Upload it, run it.

    Perhaps I'm dreaming and what I described above isn't possible. Yet, I think if cloud vendors don't move in that direction, what they're offering isn't interesting: it doesn't solve a problem at all, it simply offers a way to instantiate more VMs with the guest OS of choice, at the cost of me needing to refactor my website code so it can run in the straitjacket form factor dictated by the cloud vendor. Let's not kid ourselves here: most of us developers will never build a website which needs a truckload of VMs to run it; almost all websites created by developers can run on just a few VMs at most. Yet the most expensive change is right at the start: moving from one to two VMs. As soon as you have refactored your website code to run across multiple VMs, adding another one is as easy as clicking a mouse button. But that first step, that's the problem here, and as it's right at the beginning of scaling the website, it's particularly strange that cloud vendors refuse to solve that problem and leave it to the developers. Which makes migrating 'to the cloud' particularly expensive.

    Read the article

  • Preview of the SQL Server ODBC driver for Linux available, with tools for SQLCMD and for importing and exporting data

    Preview of the SQL Server ODBC driver for Linux available, with tools for importing and exporting data and for running queries and scripts. As Microsoft announced at its PASS Summit 2011 conference, the preview (CTP1) of its ODBC (Open Database Connectivity) driver for SQL Server on Linux operating systems is available. This driver should allow C and C++ developers to connect to SQL Server from Linux platforms, and provide a mechanism for applications and runtime libraries to access SQL Server. For now, this...

    Read the article

  • What is the preferred tool/approach to putting a SQL Server database under source control?

    - by msigman
    I've evaluated the RedGate SQL Source Control tool (http://www.red-gate.com/products/sql-development/sql-source-control/), and I believe that Team Foundation Server 2010 offers a way to do this as well (as touched on here: http://blog.discountasp.net/using-team-foundation-server-2010-source-control-from-sql-server-management-studio/). Are there alternatives, or is one of these considered the preferred/standard solution?

    Read the article

  • WebClient on WP7 throws "A request with this method cannot have a request body"

    - by Peter Hansen
    If I execute this code in a console app it works fine:

        string uriString = "http://url.com/api/v1.0/d/" + Username + "/some?amount=3&offset=0";
        WebClient wc = new WebClient();
        wc.Headers["Content-Type"] = "application/json";
        wc.Headers["Authorization"] = AuthString.Replace("\\", "");
        string responseArrayKvitteringer = wc.DownloadString(uriString);
        Console.WriteLine(responseArrayKvitteringer);

    But if I move the code to my WP7 project like this:

        string uriString = "http://url.com/api/v1.0/d/" + Username + "/some?amount=3&offset=0";
        WebClient wc = new WebClient();
        wc.Headers["Content-Type"] = "application/json";
        wc.Headers["Authorization"] = AuthString.Replace("\\", "");
        wc.DownloadStringCompleted += new DownloadStringCompletedEventHandler(wc_DownloadStringCompleted);
        wc.DownloadStringAsync(new Uri(uriString));

        void wc_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
        {
            MessageBox.Show(e.Result);
        }

    I get the exception: "A request with this method cannot have a request body." Why? The solution is to remove the Content-Type header:

        string uriString = "http://url.com/api/v1.0/d/" + Username + "/some?amount=3&offset=0";
        WebClient wc = new WebClient();
        //wc.Headers["Content-Type"] = "application/json";
        wc.Headers["Authorization"] = AuthString.Replace("\\", "");
        wc.DownloadStringCompleted += new DownloadStringCompletedEventHandler(wc_DownloadStringCompleted);
        wc.DownloadStringAsync(new Uri(uriString));

        void wc_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
        {
            MessageBox.Show(e.Result);
        }

    Read the article

  • Unable to determine the provider name for connection of type 'System.Data.SqlServerCe.SqlCeConnection'

    - by Hobbes1312
    Hi, I am using the ADO.NET Entity Framework with Code Only (tutorial at the ADO.NET team blog) and I want to be as database independent as possible. In my first approach I just want to target SQL Express and SQL Compact databases. With SQL Express everything works fine, but with SQL Compact I get the exception mentioned in my question. Does anybody know if it is possible to connect to SQL Compact with the Code Only approach? (With a generated .edmx file for a SQL Compact database everything works fine, but I want to use Code Only!) Here is some code. My class which builds the DataContext:

        public class DataContextBuilder : IDataContextBuilder
        {
            private readonly DbProviderFactory _factory;

            public DataContextBuilder(DbProviderFactory factory)
            {
                _factory = factory;
            }

            #region Implementation of IDataContextBuilder

            public IDataContext CreateDataContext(string connectionString)
            {
                var builder = new ContextBuilder<DataContext>();
                RegisterConfiguration(builder);
                var connection = _factory.CreateConnection();
                connection.ConnectionString = connectionString;
                var ctx = builder.Create(connection);
                return ctx;
            }

            #endregion

            private void RegisterConfiguration(ContextBuilder<DataContext> builder)
            {
                builder.Configurations.Add(new PersonConfiguration());
            }
        }

    The line var ctx = builder.Create(connection); is throwing the exception. IDataContext is just a simple interface for the ObjectContext:

        public interface IDataContext
        {
            int SaveChanges();
            IObjectSet<Person> PersonSet { get; }
        }

    My connection string is configured in the app.config:

        <connectionStrings>
            <add name="CompactConnection"
                 connectionString="|DataDirectory|\Test.sdf"
                 providerName="System.Data.SqlServerCe.3.5" />
        </connectionStrings>

    And the build action is started with:

        var cn = ConfigurationManager.ConnectionStrings["CompactConnection"];
        var factory = DbProviderFactories.GetFactory(cn.ProviderName);
        var builder = new DataContextBuilder(factory);
        var context = builder.CreateDataContext(cn.ConnectionString);

    Read the article

  • Why am I getting warnings about missing DLLs when adding a node to a WSFC?

    - by Stuart Branham
    We're getting the following two errors when adding a node to our WSFC:

        The node was added successfully, but the 'SQL Server Availability Group' resource type could not be installed on it. Unable to find 'hadrres.dll' on any of the cluster nodes.

        The node was added successfully, but the 'SQL Server FILESTREAM Share' resource type could not be installed on it. Unable to find 'fssres.dll' on any of the cluster nodes.

    This cluster is going to host an AlwaysOn Availability Group. SQL Server 2012 is installed on both nodes, and availability groups are enabled on both. Filestream access is also configured on both. Another curious thing I'm seeing is that my instance on the second node doesn't appear in Configuration Manager. Anyone know what may be going on here?

    Read the article

  • Cannot find one or more components. Please reinstall application

    - by Chris
    I am running Windows 8, 64 bit and have SQL Server 2012 installed. I downloaded the client tools, looked in the directory for SQL Server Management Studio, and see it's there. When I try to run SQL Management Studio I receive the error message: "Cannot find one or more components. Please reinstall application". This problem just started. I have reinstalled the application and downloaded the service packs. The shortcut key shows the path but it still will not run.

    Read the article

  • Restore DB - Error RESTORE HEADERONLY is terminating abnormally.

    - by Jordon
    I have taken a backup of a SQL Server 2008 database on the server and downloaded it to my local environment. I am trying to restore that database and it keeps giving me the following error:

        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        ------------------------------
        ADDITIONAL INFORMATION:
        The media family on device 'C:\go4sharepoint_1384_8481.bak' is incorrectly formed. SQL Server cannot process this media family.
        RESTORE HEADERONLY is terminating abnormally. (Microsoft SQL Server, Error: 3241)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=09.00.4053&EvtSrc=MSSQLServer&EvtID=3241&LinkId=20476

    I have tried to create a temp DB on the server and restore the same backup file there, and that works. I have also tried downloading the file from the server to my local PC a number of times using different options in FileZilla (Auto, Binary), but it's not working. After that I tried to execute the following command on the server:

        BACKUP DATABASE go4sharepoint_1384_8481 TO DISK=' C:\HostingSpaces\dbname_jun14_2010_new.bak' with FORMAT

    And it gives me the following error:

        Msg 3201, Level 16, State 1, Line 1
        Cannot open backup device 'c:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\Backup\ C:\HostingSpaces\dbname_jun14_2010_new.bak'. Operating system error 123(The filename, directory name, or volume label syntax is incorrect.).
        Msg 3013, Level 16, State 1, Line 1
        BACKUP DATABASE is terminating abnormally.

    After researching I found the following two useful links:

    1) http://support.microsoft.com/kb/290787
    2) http://social.msdn.microsoft.com/Forums/en-US/sqlsetupandupgrade/thread/4d5836f6-be65-47a1-ad5d-c81caaf1044f

    But I am still unable to restore the database correctly. Any help would be much appreciated, thanks.
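
    Before trying further restores, it can help to check whether the downloaded .bak file is readable at all. The sketch below is one way to do that from C#; the connection string is a placeholder, and the assumption is that RESTORE VERIFYONLY will raise the same media-family error (3241) if the file was corrupted in transit, for example by an ASCII-mode FTP transfer.

        using System.Data.SqlClient;

        class BackupCheck
        {
            static void Main()
            {
                // Placeholder connection string; point it at the instance you are restoring on.
                var connectionString = @"Server=.\SQLEXPRESS;Integrated Security=true";

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("RESTORE VERIFYONLY FROM DISK = @path", conn))
                {
                    cmd.Parameters.AddWithValue("@path", @"C:\go4sharepoint_1384_8481.bak");
                    cmd.CommandTimeout = 0;   // verifying a large backup can take a while
                    conn.Open();
                    // Throws a SqlException describing the problem if the backup media is unreadable;
                    // completes without error if the file is a valid backup.
                    cmd.ExecuteNonQuery();
                }
            }
        }

    If this check fails locally but the same statement succeeds against the copy still on the server, the file was altered during download and no restore option will fix it.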

    Read the article

  • How can I stop SQL Server Reporting Services 2008 going to sleep?

    - by Nick
    I have SSRS 2008 set up on a server. All works fine, except that if it is left inactive for a length of time, the next time a request is made to the server it takes a long time to service it. I think this is to do with the worker process being shut down after being idle for a certain length of time. However, as SSRS 2008 isn't managed through IIS, I can't find any settings that I can adjust to stop this from happening. In IIS I'd go to the Performance tab of the Application Pool Properties and choose not to shut down the worker process. How can I do this for SSRS 2008?

    Read the article

  • File upload and download in a web app using Struts 2

    - by lakshmanan
    Hi. With Struts 2 upload methods, can I choose where the uploaded file is saved? I mean, all the examples on the web ask me to store it in WEB-INF, which surely is not a good idea. I want to be able to store the uploaded file anywhere on my disk. How should I do it? Can I do it with the help of the ServletContextAware interceptor?

    Read the article

  • Can SQL Server BULK INSERT read from a named pipe/FIFO?

    - by Peter
    Is it possible for BULK INSERT/bcp to read from a named pipe, FIFO-style? That is, rather than reading from a real text file, can BULK INSERT/bcp be made to read from a named pipe which is on the write end of another process? For example:

    - create a named pipe
    - unzip a file to the named pipe
    - read from the named pipe with bcp or BULK INSERT

    or:

    - create 4 named pipes
    - split 1 file into 4 streams, writing each stream to a separate named pipe
    - read from the 4 named pipes into 4 tables with bcp or BULK INSERT

    The closest I've found was this fellow (site now unreachable), who managed to write to a named pipe with bcp, with his own utility and usage like so:

        start /MIN ZipPipe authors_pipe authors.txt.gz 9
        bcp pubs..authors out \\.\pipe\authors_pipe -T -n

    But he couldn't get the reverse to work. So before I head off on a fool's errand, I'm wondering whether it's fundamentally possible to read from a named pipe with BULK INSERT or bcp. And if it is possible, how would one set it up? Would NamedPipeServerStream or something else in the .NET System.IO.Pipes namespace be adequate? E.g., an example using PowerShell:

        [reflection.Assembly]::LoadWithPartialName("system.core")
        $pipe = New-Object system.IO.Pipes.NamedPipeServerStream("Bob")

    And then... what?
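
    Whether bcp or BULK INSERT will actually accept \\.\pipe\... as an input path is exactly what the question leaves open, but purely as a sketch of the .NET side, a NamedPipeServerStream that feeds character-mode rows to whatever client opens the pipe could look like this (the pipe name and sample rows are made up to match the pubs..authors example above):

        using System.IO;
        using System.IO.Pipes;

        class PipeFeeder
        {
            static void Main()
            {
                // Server end of the pipe; a reader would open it as \\.\pipe\authors_pipe
                using (var pipe = new NamedPipeServerStream("authors_pipe", PipeDirection.Out))
                {
                    pipe.WaitForConnection();   // blocks until the client (e.g. bcp) opens the pipe
                    using (var writer = new StreamWriter(pipe))
                    {
                        // Tab-delimited, newline-terminated rows, matching bcp's -c character format
                        writer.WriteLine("172-32-1176\tWhite\tJohnson");
                        writer.WriteLine("213-46-8915\tGreen\tMarjorie");
                        writer.Flush();
                    }
                }
            }
        }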

    Read the article

  • How to mass insert/update in LINQ to SQL?

    - by chobo2
    Hi, how can I do these two scenarios? Currently I am doing something like this:

        public class Repository
        {
            private LinqtoSqlContext dbcontext = new LinqtoSqlContext();

            public void Update()
            {
                // find record
                // update record
                // save record ( dbcontext.submitChanges() )
            }

            public void Insert()
            {
                // make a database table object ( ie ProductTable t = new ProductTable() { productname = "something" } )
                // insert record ( dbcontext.ProductTable.insertOnSubmit() )
                // dbcontext.submitChanges();
            }
        }

    So now I am trying to load an XML file which has tons of records. First I validate the records one at a time. I then want to insert them into the database, but instead of doing submitChanges() after each record I want to do a mass submit at the end. So I have something like this:

        public class Repository
        {
            private LinqtoSqlContext dbcontext = new LinqtoSqlContext();

            public void Update()
            {
                // find record
                // update record
            }

            public void Insert()
            {
                // make a database table object ( ie ProductTable t = new ProductTable() { productname = "something" } )
                // insert record ( dbcontext.ProductTable.insertOnSubmit() )
            }

            public void SaveToDb()
            {
                dbcontext.submitChanges();
            }
        }

    Then in my service layer I would do something like:

        for (int i = 0; i < 100; i++)
        {
            validate();
            if (valid == true)
            {
                update();
                insert();
            }
        }
        SaveToDb();

    So pretend my for loop has a count of all the records found in the XML file. I first validate each record. If it is valid, I have to update a table before I insert the record. I then insert the record. After that I want to save everything in one go. I am not sure if I can do a mass save when updating or if that has to happen every time, but I thought it would work for sure for the insert. Nothing seems to crash, and I am not sure how to check if the records are being added to the dbcontext.
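
    For the insert half of this, one option is to queue all new entities and call SubmitChanges once at the end. The sketch below reuses the type names from the question; XmlRecord and Validate are hypothetical placeholders for whatever the XML parsing and validation produce.

        using System.Collections.Generic;

        public void ImportProducts(IEnumerable<XmlRecord> recordsFromXml)
        {
            using (var dbcontext = new LinqtoSqlContext())
            {
                var pending = new List<ProductTable>();

                foreach (var record in recordsFromXml)
                {
                    if (!Validate(record))
                        continue;

                    // Updates need no extra call here: entities loaded from this context and
                    // modified in this loop are tracked and written out by the same SubmitChanges.
                    pending.Add(new ProductTable { productname = record.Name });
                }

                dbcontext.ProductTable.InsertAllOnSubmit(pending); // queue every insert
                dbcontext.SubmitChanges();                         // one submit for the whole batch
            }
        }

    SubmitChanges wraps all queued inserts and tracked updates in a single transaction, so a failure part-way through rolls the whole batch back, and whether it throws tells you whether the records reached the database.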

    Read the article

  • Execute Backup-SqlDatabase cmdlet remotely

    - by Maxim V. Pavlov
    When I run the following script line locally on a SQL Server machine, it executes perfectly:

        Backup-SqlDatabase -ServerInstance $serverName -Database $sqldbname -BackupFile "$($backupFolder)$($dbname)_db_$($addinionToName).bak"

    $serverName contains a short name of the SQL Server instance. SQL Server is 2012, so these new cmdlets work like a charm. On the other hand, when I try to perform a DB backup from a TeamCity agent machine like this (through the Invoke-Command cmdlet):

        function BackupDB([String] $serverName, [String] $sqldbname, [String] $backupFolder, [String] $addinionToName)
        {
            Import-Module SQLPS -DisableNameChecking
            Backup-SqlDatabase -ServerInstance $serverName -Database $sqldbname -BackupFile "$($backupFolder)$($dbname)_db_$($addinionToName).bak"
        }

        Invoke-Command -computername $SQLComputerName -Credential $credentials -ScriptBlock ${function:BackupDB} -ArgumentList $SQLInstanceName, $DatabaseName, $BackupDirectory, $BakId

    it results in an error:

        Failed to connect to server $serverName.
            + CategoryInfo          : NotSpecified: (:) [Backup-SqlDatabase], ConnectionFailureException
            + FullyQualifiedErrorId : Microsoft.SqlServer.Management.Common.ConnectionFailureException,Microsoft.SqlServer.Management.PowerShell.BackupSqlDatabaseCommand

    What is the correct way to execute the Backup-SqlDatabase cmdlet remotely?

    Read the article

  • How many inserts can you have in a SQL transaction?

    - by Mav
    I have a task that will require me to use a transaction to ensure that many inserts are all completed or the entire update is rolled back. I am concerned about the amount of data that needs to be inserted in this transaction and whether it will have a negative effect on the server. We are looking at about 10,000 records in table1 and 60,0000 records into table2. Is this safe to do in a single transaction?
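
    Tens of thousands of rows are generally fine in one transaction; the main costs are lock escalation and the transaction log space held for its duration. If the inserts come from client code row by row, something like SqlBulkCopy enlisted in a single transaction keeps the round trips down while staying atomic. A minimal sketch, assuming hypothetical table names and DataTable sources:

        using System.Data;
        using System.Data.SqlClient;

        public static class BulkLoader
        {
            public static void LoadTables(string connectionString, DataTable table1Rows, DataTable table2Rows)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    using (var tx = conn.BeginTransaction())
                    {
                        BulkLoad(conn, tx, "dbo.table1", table1Rows);
                        BulkLoad(conn, tx, "dbo.table2", table2Rows);
                        tx.Commit();            // everything commits or rolls back together
                    }
                }
            }

            static void BulkLoad(SqlConnection conn, SqlTransaction tx, string table, DataTable rows)
            {
                using (var bulk = new SqlBulkCopy(conn, SqlBulkCopyOptions.Default, tx))
                {
                    bulk.DestinationTableName = table;
                    bulk.BatchSize = 5000;      // rows per round trip; atomicity comes from the outer transaction
                    bulk.WriteToServer(rows);
                }
            }
        }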

    Read the article

  • JPQL / SQL: How to select * from a table with group by on a single column?

    - by DavidD
    I would like to select every column of a table, but want to have distinct values on a single attribute of my rows (City in the example). I don't want extra columns like counts or anything, just a limited number of results, and it seems like it is not possible to directly LIMIT results in a JPQL query.

    Original table:

        ID | Name   | City
        ---------------------------
        1  | John   | NY
        2  | Maria  | LA
        3  | John   | LA
        4  | Albert | NY

    Wanted result, if I do the distinct on the City:

        ID | Name  | City
        ---------------------------
        1  | John  | NY
        2  | Maria | LA

    What is the best way to do that? Thank you for your help.

    Read the article

  • [C#] A problem downloading a webpage's HTML source.

    - by Nam Gi VU
    I use System.Net.WebClient.DownloadString(url) to get the HTML source of http://kqxs.vn, but what I received is a caution text from the web server, which says in Vietnamese: "Xin loi. Chung toi khong the dap ung yeu cau truy cap cua ban... Vui long lien he : [email protected]. Chao ban", which translates to English as "Sorry. We cannot respond to your request... Please contact... Goodbye." This is strange, because when I use a WebControl to get the HTML (by calling .Navigate(url) and then .DocumentText), I receive different HTML code, which in turn is exactly what I see when I open the website in Firefox and view the source code from Firefox. I read the Stack Overflow question "DownloadData() is downloading source that is completely wrong. Source view in Firefox different than that downloaded" and found the answer to my symptom. But I don't know how to set the User-Agent. Please help.
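
    For the last part, WebClient lets you set the User-Agent through its Headers collection before making the request; a minimal sketch (the user-agent string itself is just an example copied from a desktop Firefox):

        using System;
        using System.Net;

        class Program
        {
            static void Main()
            {
                var client = new WebClient();
                // Some servers refuse requests that carry no browser-like User-Agent.
                client.Headers[HttpRequestHeader.UserAgent] =
                    "Mozilla/5.0 (Windows NT 6.1; rv:10.0) Gecko/20100101 Firefox/10.0";
                string html = client.DownloadString("http://kqxs.vn");
                Console.WriteLine(html.Substring(0, Math.Min(200, html.Length)));
            }
        }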

    Read the article

  • LINQ to SQL - Save an entity without creating a new DataContext?

    - by aximili
    I get the error "Cannot add an entity with a key that is already in use" when I try to save an Item:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Edit(Item item)
        {
            Global.DataContext.Items.Attach(item);
            Global.DataContext.SubmitChanges();
            return View(item);
        }

    That's because I cannot attach the item to the static global DataContext. Is it possible to save an item without creating a new DataContext? (I am very new to LINQ.)
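
    A common alternative is a short-lived DataContext per request, attaching the posted entity as modified. The sketch below assumes the Item table has a timestamp/rowversion column (without one, load the original row and copy the posted values onto it instead); MyDataContext is a placeholder for whatever context class Global.DataContext is an instance of.

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Edit(Item item)
        {
            // MyDataContext stands in for the real DataContext class used by Global.DataContext.
            using (var db = new MyDataContext())
            {
                db.Items.Attach(item, true);   // 'true' = treat the detached entity as modified
                db.SubmitChanges();
            }
            return View(item);
        }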

    Read the article

  • Response.TransmitFile() with UNC share (ASP.NET)

    - by frankadelic
    In the comments of this page: http://msdn.microsoft.com/en-us/library/12s31dhy.aspx, it says that TransmitFile() cannot be used with UNC shares. As far as I can tell, this is the case; I get this error in the Event Log when I attempt it:

        TransmitFile failed. File Name: \\myshare1\e$\file.zip, Impersonation Enabled: 0, Token Valid: 1, HRESULT: 0x8007052e

    The suggested alternative is to use WriteFile(); however, this is problematic because it loads the file into memory. In my application the files are 200 MB, so this is not going to scale. Is there a method in ASP.NET for streaming files to users that is:

    - scalable (doesn't read the entire file into RAM or occupy ASP.NET threads)
    - works with UNC shares

    Mapping a network drive as a virtual directory is not an option for us. I would like to avoid copying the file to the local web server as well. Thanks.
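
    One commonly used fallback, sketched below under the assumption that the application pool identity can read the share: stream the file from the UNC path in small chunks through Response.OutputStream, so nothing close to 200 MB is ever held in memory. It does, however, still occupy an ASP.NET worker thread for the duration of the download.

        using System.IO;
        using System.Web;

        public static class FileStreamer
        {
            public static void StreamUncFile(HttpResponse response, string uncPath, string downloadName)
            {
                response.ContentType = "application/octet-stream";
                response.AddHeader("Content-Disposition", "attachment; filename=" + downloadName);
                response.BufferOutput = false;          // don't let ASP.NET buffer the whole file

                var buffer = new byte[64 * 1024];
                using (var fs = new FileStream(uncPath, FileMode.Open, FileAccess.Read, FileShare.Read))
                {
                    int read;
                    while ((read = fs.Read(buffer, 0, buffer.Length)) > 0 && response.IsClientConnected)
                    {
                        response.OutputStream.Write(buffer, 0, read);
                        response.Flush();               // push each chunk to the client as it is read
                    }
                }
            }
        }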

    Read the article

  • LINQ to SQL compiled query returns object NOT belonging to submitted DataContext?

    - by Vladimir Kojic
    Compiled query:

        public static class Machines
        {
            public static readonly Func<OperationalDataContext, short, Machine> QueryMachineById =
                CompiledQuery.Compile((OperationalDataContext db, short machineID) =>
                    db.Machines.Where(m => m.MachineID == machineID).SingleOrDefault()
                );

            public static Machine GetMachineById(IUnitOfWork unitOfWork, short id)
            {
                Machine machine;

                // Old code (working)
                //var machineRepository = unitOfWork.GetRepository<Machine>();
                //machine = machineRepository.Find(m => m.MachineID == id).SingleOrDefault();

                // New code (making problems)
                machine = QueryMachineById(unitOfWork.DataContext, id);

                return machine;
            }
        }

    It looks like the compiled query is returning a result from another data context:

        [TestMethod]
        public void GetMachinesTest()
        {
            using (var unitOfWork = IoC.Get<IUnitOfWork>())
            {
                // Compile Query
                var machine = Machines.GetMachineById(unitOfWork, 3);
            }

            using (var unitOfWork = IoC.Get<IUnitOfWork>())
            {
                var machineRepository = unitOfWork.GetRepository<Machine>();

                // Get From Repository
                var machineFromRepository = machineRepository.Find(m => m.MachineID == 2).SingleOrDefault();
                var machine = Machines.GetMachineById(unitOfWork, 2);

                VerifyHuskyHostMachine(machineFromRepository, 2, "Machine 2", "222222", "H400RS", "MachineIconB.xaml", false, true, LicenseType.Licensed, InterfaceType.HuskyHostV2, "10.0.97.2:8080", "10.0.97.2", 8080, "4.0");
                VerifyHuskyHostMachine(machine, 2, "Machine 2", "222222", "H400RS", "MachineIconB.xaml", false, true, LicenseType.Licensed, InterfaceType.HuskyHostV2, "10.0.97.2:8080", "10.0.97.2", 8080, "4.0");

                Assert.AreSame(machineFromRepository, machine); // FAIL
            }
        }

    If I run other (complex) unit tests I get, as expected: "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext." Another important piece of information is that this test runs under a TransactionScope!

    UPDATE: It looks like the following link describes a similar problem (is this bug solved?): http://social.msdn.microsoft.com/Forums/en-US/linqprojectgeneral/thread/9bcffc2d-794e-4c4a-9e3e-cdc89dad0e38

    Read the article
