Search Results

Search found 102772 results on 4111 pages for 'sql server 2008'.


  • I can't apply the prefast /StackHogThreshold option.

    - by Benjamin
    I'm using ddkbuild to build my driver, and I also use prefast. Prefast has a /StackHogThreshold option; the default value is 1024 bytes, but I can't modify that value. This is my input for 'Rebuild All Command Line':

        C:\Windows\System32\cmd.exe /k "$(SolutionDir)....\ddkbuild.bat" -WIN7WLHA64 -prefast free $(ProjectDir) -cZ

    Read the article

  • Does normalization really hurt performance in high traffic sites?

    - by Luke101
    I am designing a database and I would like to normalize it. In one query I will be joining about 30-40 tables. Will this hurt the website's performance if it ever becomes extremely popular? This will be the main query, and it will be called about 50% of the time. In the other queries I will be joining about 2 tables. I have a choice right now to normalize or not, but if normalization becomes a problem in the future I may have to rewrite 40% of the software, and that could take me a long time. Does normalization really hurt in this case? Should I denormalize now while I have the time?

    Read the article

  • Why use "out" instead of "ref"?

    - by Phsika
    I wrote some code to compare the ref and out declarations. I think ref is more useful than out. So why do I need out at all, when I can always use ref instead?

        namespace out_ref
        {
            class Program
            {
                static void Main(string[] args)
                {
                    sinifA sinif = new sinifA();
                    int test = 100;
                    sinif.MethodA(out test);
                    Console.WriteLine(test.ToString());
                    sinif.MethodB(ref test);
                    Console.WriteLine(test.ToString());
                    Console.ReadKey();
                }
            }

            class sinifA
            {
                public void MethodA(out int a)
                {
                    a = 200;
                }

                int _b;
                public void MethodB(ref int b)
                {
                    _b = b;
                    b = 2 * b;
                }
            }
        }
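
    For context, the practical difference shows up in what the compiler enforces: a variable passed as out does not have to be assigned before the call but must be assigned by the method, while a variable passed as ref must already be definitely assigned at the call site. A minimal sketch (hypothetical class, not from the question above):

        // Minimal sketch of the compiler rules that separate "out" from "ref".
        class OutVsRefDemo
        {
            static void FillValue(out int value)
            {
                // "out": the method MUST assign the parameter before returning.
                value = 42;
            }

            static void DoubleValue(ref int value)
            {
                // "ref": the caller's existing value is readable here and may be changed.
                value *= 2;
            }

            static void Main()
            {
                int a;                    // not initialized
                FillValue(out a);         // OK: an "out" argument may be unassigned at the call

                int b = 1;                // a "ref" argument must already be assigned
                DoubleValue(ref b);
                // With "int c;" unassigned, "DoubleValue(ref c);" would not compile (CS0165).

                System.Console.WriteLine(a + " " + b);   // prints "42 2"
            }
        }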

    Read the article

  • How to use a transaction in Entity Framework?

    - by programmerist
    How do I use a transaction in Entity Framework? I read this on Stack Overflow: http://stackoverflow.com/questions/815586/entity-framework-using-transactions-or-savechangesfalse-and-acceptallchanges BUT I have 3 tables, so I have 3 entities:

        CREATE TABLE Personel (
            PersonelID integer PRIMARY KEY identity not null,
            Ad varchar(30),
            Soyad varchar(30),
            Meslek varchar(100),
            DogumTarihi datetime,
            DogumYeri nvarchar(100),
            PirimToplami float);
        GO
        CREATE TABLE Prim (
            PrimID integer PRIMARY KEY identity not null,
            PersonelID integer FOREIGN KEY references Personel(PersonelID),
            SatisTutari int,
            Prim float,
            SatisTarihi Datetime);
        GO
        CREATE TABLE Finans (
            ID integer PRIMARY KEY identity not null,
            Tutar float);

    Personel, Prim and Finans are my tables. If you look at the Prim table you can see that the Prim column is a float; if what I type into a textbox is not a valid float, the whole transaction must roll back.

        using (TestEntities testCtx = new TestEntities())
        {
            using (TransactionScope scope = new TransactionScope())
            {
                // do something...
                testCtx.Personel.SaveChanges();
                // do something...
                testCtx.Prim.SaveChanges();
                // do something...
                testCtx.Finans.SaveChanges();
                scope.Complete();
                success = true;
            }
        }

    How can I do that?
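
    One thing worth noting in the snippet above: SaveChanges is a method on the ObjectContext itself, not on the individual entity sets, so calls like testCtx.Personel.SaveChanges() will not compile. A minimal sketch of the pattern the linked answer describes, reusing the TestEntities context from the question (the SaveAll method and primText parameter are hypothetical), with the textbox value validated before anything is saved:

        using System.Transactions;

        class TransactionSketch
        {
            // Minimal sketch, assuming the TestEntities context and the Personel/Prim/Finans
            // entities from the question. Nothing is saved unless the textbox value parses
            // as a float, and the TransactionScope only commits once Complete() is called.
            public static bool SaveAll(string primText)
            {
                float primValue;
                if (!float.TryParse(primText, out primValue))
                    return false;                      // invalid input: no transaction, no changes

                using (TestEntities testCtx = new TestEntities())
                using (TransactionScope scope = new TransactionScope())
                {
                    // ... create/modify Personel, Prim (using primValue) and Finans entities here ...

                    testCtx.SaveChanges();             // one call on the context persists all pending changes
                    scope.Complete();                  // without this, everything rolls back on dispose
                    return true;
                }
            }
        }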

    Read the article

  • Dynamic creation of VS Project

    - by Adkins
    I have a project where I create WiX (Windows Installer XML) files when they are not already present, and it is working perfectly. Now I want to expand it to add more functionality. I was wondering if there is a way to create a Visual Studio project programmatically? This tool runs as part of our nightly build process, and when a new WiX file is needed it is created, but I want to have everything in place when the build is finished so that, if necessary, you can just open the project in Visual Studio and start editing. Am I dreaming outside the realm of possibility? Any nudge in the right direction would be greatly appreciated.
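
    One possible nudge, offered as a sketch rather than a definitive answer: a Visual Studio project file is just MSBuild XML, so it can be emitted with XDocument/XElement. The element names below follow the public MSBuild schema; a real .wixproj or .csproj also needs an Import of the appropriate .targets file and a few more properties, best copied once from a project created by hand (the class, the "MyInstaller" value and the ToolsVersion are illustrative assumptions).

        using System.Linq;
        using System.Xml.Linq;

        // Minimal sketch: emit a skeleton MSBuild project file for a set of source files.
        class ProjectFileWriter
        {
            static readonly XNamespace Ns = "http://schemas.microsoft.com/developer/msbuild/2003";

            public static void Write(string projectPath, string[] sourceFiles)
            {
                var project =
                    new XElement(Ns + "Project",
                        new XAttribute("ToolsVersion", "3.5"),
                        new XElement(Ns + "PropertyGroup",
                            new XElement(Ns + "OutputName", "MyInstaller")),      // hypothetical value
                        new XElement(Ns + "ItemGroup",
                            sourceFiles.Select(f =>
                                new XElement(Ns + "Compile", new XAttribute("Include", f)))));

                new XDocument(project).Save(projectPath);
            }
        }

    An alternative route is the Visual Studio automation model (EnvDTE), but generating the MSBuild XML directly has the advantage of working inside a nightly build with no IDE present.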

    Read the article

  • Why are there performance differences when a SQL function is called from a .NET app vs. when the same call is made from SSMS?

    - by Dan Snell
    We are having a problem in our test and dev environments with a function that at times runs quite slowly when called from a .NET application. When we call the same function directly from Management Studio it works fine. Here are the differences when the two are profiled:

        From the application:  CPU: 906   Reads: 61853   Writes: 0   Duration: 926
        From SSMS:             CPU: 15    Reads: 11243   Writes: 0   Duration: 31

    We have determined that when we recompile the function, performance returns to what we expect, and the profile of the call from the application matches what we get when we run it from SSMS. It then starts slowing down again at what appear to be random intervals. We have not seen this in prod, but that may be partly because everything there is recompiled on a weekly basis. What might cause this sort of behavior?
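
    One frequent cause of this exact symptom (not necessarily the cause here) is that SSMS and ADO.NET connections use different SET options, in particular ARITHABORT, so the two end up with separately cached plans and the application's plan can suffer from parameter sniffing. A minimal diagnostic sketch that aligns the application session with SSMS defaults to see whether the slow profile disappears; the connection string and dbo.MySlowFunction are hypothetical placeholders:

        using System.Data.SqlClient;

        // Diagnostic sketch: match SSMS's ARITHABORT setting from the app and re-measure.
        class FunctionTimingCheck
        {
            static void Main()
            {
                using (var conn = new SqlConnection("Server=.;Database=TestDb;Integrated Security=true"))
                {
                    conn.Open();

                    // SSMS runs with ARITHABORT ON by default; ADO.NET connections do not.
                    using (var setCmd = new SqlCommand("SET ARITHABORT ON", conn))
                        setCmd.ExecuteNonQuery();

                    // Call the function the same way the application does.
                    using (var cmd = new SqlCommand("SELECT dbo.MySlowFunction(@p)", conn))
                    {
                        cmd.Parameters.AddWithValue("@p", 42);
                        System.Console.WriteLine(cmd.ExecuteScalar());
                    }
                }
            }
        }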

    Read the article

  • SSRS Parameters Displaying incorrectly

    - by Ryan
    Basically, I have a datetime parameter. After picking a date with the calendar widget, the date displays correctly (12/1/2010 or 1-DEC-2010). If the parameter tab refreshes in any way, either from report processing or from changing another of the parameters, the date flips the month and day (1-DEC-2010 becomes 12-Jan-2010, and 12/1/2010 becomes 1/12/2010). I'm using the SSRS control for C# (Microsoft.Reporting.Winforms.ReportViewer). Has anyone seen anything like this?
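
    A month/day flip like this is the classic symptom of a culture mismatch between where the date is parsed and where it is redisplayed, so one thing worth testing is pinning the UI thread's culture before the viewer refreshes its parameters. A minimal sketch under that assumption (the class, method and chosen culture are placeholders):

        using System.Globalization;
        using System.Threading;

        // Sketch: pin the thread culture before setting parameters / calling RefreshReport(),
        // to check whether the flip is a culture mismatch rather than a report bug.
        class ReportHost
        {
            public static void ApplyReportCulture()
            {
                var culture = CultureInfo.GetCultureInfo("en-GB");   // whichever culture the dates are entered in
                Thread.CurrentThread.CurrentCulture = culture;
                Thread.CurrentThread.CurrentUICulture = culture;
                // ... then set the ReportViewer's parameters and call RefreshReport() ...
            }
        }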

    Read the article

  • Loading XML with encoding UTF-16 using XDocument

    - by Sangram
    I am trying to read an XML document with XDocument, but I get an error when the XML has

        <?xml version="1.0" encoding="utf-16"?>

    When I remove the encoding manually, it works perfectly. The error is: "There is no Unicode byte order mark. Cannot switch to Unicode." I tried searching and I landed here: Why does C# XmlDocument.LoadXml(string) fail when an XML header is included? But I could not solve my problem. My code:

        XDocument xdoc = XDocument.Load(path);

    Any suggestions? Thank you.
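
    That error usually means the bytes on disk do not actually match the utf-16 declaration (for example the file was saved as UTF-8 with no byte order mark). If that is the case here, one workaround is to load through a StreamReader whose encoding matches the real bytes, so the parser is not asked to switch encodings mid-stream. A minimal sketch under that assumption:

        using System.IO;
        using System.Text;
        using System.Xml.Linq;

        // Sketch: read the file with an explicit encoding and hand XDocument the
        // already-decoded characters, sidestepping the declared-encoding check.
        class Utf16DeclarationWorkaround
        {
            public static XDocument LoadDespiteDeclaration(string path)
            {
                using (var reader = new StreamReader(path, Encoding.UTF8))
                {
                    return XDocument.Load(reader);
                }
            }
        }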

    Read the article

  • Looking for MSSQL Table Design Sanity Check for Profile Tables with Dynamic Columns.

    - by Code Sherpa
    I just want a general sanity check regarding database design. We are building a web system that has both Teachers and Students. Both have accounts in the system, and both have profiles. My question is about the table design of those profile tables.

    The Teacher profile is pretty static regarding the metadata associated with it: each teacher has a set number of fields that expose information about that individual (schools, degrees, etc.). The students, however, are a different case. We are using a Windows service to pull varying data about the students from an endless stream of Excel spreadsheets. The data gets moved into our database and the fields then appear in association with the student's profile. Accordingly, each and every student may have very different fields in their profile. I originally started with the concept of three tables:

        Accounts
        ----------
        AccountID

        TeacherProfiles
        ----------
        TeacherProfileID
        AccountID
        SecondarySchool
        University
        YearsTeaching
        Etc...

        StudentProfiles
        ----------
        StudentProfileID
        AccountID
        Header
        Value

    The StudentProfiles table would hold the names of the column headers from the Excel spreadsheets and the associated values. I have since evolved the design a little to treat profiles more generically, per the attached ERD image. The Teacher and Student "headers" are stored in a table called ProfileAttributeTypes, and responses (either from the Excel document or via input fields on the web form) are put in a ProfileAttributes table. This way both Student and Teacher profiles can be associated with a dynamic flow of profile fields. The Permissions table tells us whether we are dealing with a Student or a Teacher.

    Since this system is likely to grow quickly, I want to make sure the foundation is solid. Can you please provide feedback on this design and let me know whether it seems sound, or whether you can see problems it might create and, if so, what might be a better approach? Thanks in advance.

    Read the article

  • SQL: Why is prefixing column names considered bad practice?

    - by P.Brian.Mackey
    According to a popular SO post, it is considered bad practice to prefix column names with the table name. At my company every column is prefixed by its table name, and I find this difficult to read. I'm not sure of the reason, but this naming is the company standard. I can't stand the naming convention, but I have no documentation to back up my reasoning. All I know is that reading AdventureWorks is much simpler. In our company DB you will see a table Person, and it might have a column named Person_First_Name, or maybe even Person_Person_First_Name (don't ask me why you see Person twice). Why is it considered bad practice to prefix column names? Are underscores considered evil in SQL as well? Note: I own Pro SQL Server 2008 Relational Database Design and Implementation; references to that book are welcome.

    Read the article

  • Association end is not mapped in ADO entity framework

    - by Sean
    I am just starting out with the ADO.NET Entity Framework. I have mapped two tables together and receive the following error:

        Error 1  Error 11010: Association End 'OperatorAccess' is not mapped.
        E:\Visual Studio\projects\Brandi II\Brandi II\Hospitals.edmx  390  11  Brandi II

    Not sure what I am doing wrong.

    Read the article

  • How to Auto-Increment Non-Primary Key? - SQL Server

    - by user311509
        CREATE TABLE SupplierQuote
        (
            supplierQuoteID int identity (3504,2)
                CONSTRAINT supquoteid_pk PRIMARY KEY,
            PONumber int identity (9553,20) NOT NULL,
            . . .
            CONSTRAINT ponumber_uq UNIQUE(PONumber)
        );

    The above DDL produces an error:

        Msg 2744, Level 16, State 2, Line 1
        Multiple identity columns specified for table 'SupplierQuote'.
        Only one identity column per table is allowed.

    How can I solve it? I want PONumber to be auto-incremented.

    Read the article

  • choose append to existing backup instead of overwrite

    - by aron
    Hello, I have a database, and I made its first backup 2 days ago. Then yesterday I spent an entire day adding new records. This morning I ran a backup, but I selected "Append to the existing backup set", as pictured below. I just ran a restore and found that it wiped out all my data from yesterday and restored the version from 2 days ago, not the version from this morning's backup. I zipped this backup file to be safe. I then changed some data in the DB and ran the backup again, but this time I selected "Overwrite all existing backup sets". Now when I restore the DB it seems to restore the data correctly. I think I learned a lesson here; correct me if I'm wrong. My questions are: did I lose an entire day of work? I still have this morning's backup .bak file safe in a zip. Is there any way I can restore it with the right data?

    Read the article

  • How to design parts of the application in XAML and then reuse them?

    - by MartyIX
    I'm working on the main window of my application and I would like to design parts of the window separately in the Visual Studio designer:

    - Main window
    - Game desk (actually several of them, so it would be nice to design the game desk once, mark it as a resource, and then create it via simple code, something like creating a new object and setting its DataContext)
    - Console
    - And so on

    Is this possible in VS? I just need to know what to look for, if it is possible; I don't need a whole solution. Thank you for suggestions!
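
    What the list above describes maps fairly directly onto WPF UserControls: each part (game desk, console) can be designed as its own UserControl in the designer and then instantiated from code with its DataContext set. A minimal sketch with hypothetical names (GameDeskControl, gameDeskViewModel); in a real project GameDeskControl would be the designer-generated control with its own XAML:

        using System.Windows.Controls;

        // Stand-in for the control that would normally be designed in its own XAML file.
        class GameDeskControl : UserControl { }

        class GameDeskHost
        {
            // Creates one game desk, binds it to its data, and drops it into an existing
            // container (e.g. a Grid or StackPanel already defined in the main window).
            public static GameDeskControl AddGameDesk(Panel container, object gameDeskViewModel)
            {
                var desk = new GameDeskControl();
                desk.DataContext = gameDeskViewModel;   // bindings in the control's XAML resolve against this
                container.Children.Add(desk);
                return desk;
            }
        }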

    Read the article

  • Change right-click context menu options in VS2008

    - by Mark Ursino
    When I right-click in my class library project, I get some quick options to create things, like an item from the popup list (New Item...), a User Control, etc. Now in my web app project, I'd like to get "User Control" listed in the right-click menu just like Component and Class, so I don't have to click New Item... and then choose it from there. Is there a way to do this in the configuration? I can't seem to figure it out in VS.

    Read the article

  • How to find where program crashed

    - by Mick
    I have a program that crashes (attempting to read a bad memory address) while running the "release" version, but it does not report any problems while running the "debug" version in the Visual Studio debugger. When the program crashes the OS asks if I'd like to open up the debugger, and if I say yes I see an arrow pointing to where I am in a listing of some assembler, which I am not skilled enough to read properly (I learned 6502 assembler 30 years ago). Is there any way for me to determine where in my source code the offending memory read was located?

    Read the article

  • Can TFS workspaces be used without being tied to a specific machine?

    - by GWLlosa
    So I've got a situation where we have a project with 10 developers. Each developer, when they come in for the day, is randomly issued a machine to use for development that day. The machine names are different, say DEV01 - DEV10. At the time that they are issued to the developers, the machines are identical, and no changes the developers make during the day are persisted on the machines (source code changes are stored in TFS, not locally). These are of course actually virtual machines, but that's not really relevant to the point at hand. The problem is that each morning the developers run into 3 issues:

    1) The machine they are assigned may not be the same machine they were last assigned. For example, DevMan A might have used DEV04 yesterday and received DEV06 today. His workspace definitions are now tied to DEV04; he must create a new workspace, or migrate the old workspace to DEV06.

    2) The machine they are assigned may have been in use yesterday, and some of the mappings may conflict. For example, DevMan A might have DEV04 today and wish to create a workspace mapping the project folder to "C:\MyProj\Solution". However, DevMan B had DEV04 yesterday and used the same project folder; TFS now complains.

    3) This may be the first time they are on a given machine. They now need to recreate all of their source-control mappings for the new machine.

    All of these issues can be resolved in a straightforward fashion on a case-by-case basis, but it does sap some productivity from the morning. We'd much prefer it if the TFS workspace definitions could be 'relaxed' so that they did not include the machine name in the definition. Barring that, if anyone is aware of a solution to the above problems that can run automatically, or with limited user intervention, that would also be ideal.

    Read the article

  • Is it possible to generate a constant value during compilation?

    - by AOI Karasu
    I would like each of my classes to be identified by a unique hash code, but I don't want these hashes to be generated every time a method, e.g. int GetHashCode(), is invoked at runtime. I'd like to use already-generated constants, and I was hoping there is a way to make the compiler do the computing and set these constants. Can it be done using templates? Could you give me an example, if it is possible?
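
    C# has no direct equivalent of C++ template metaprogramming for this, but a static field in a generic class is initialized exactly once per closed type, which gets close to the "already generated, never recomputed per call" behavior described above. A minimal sketch; the TypeHash name is hypothetical and the hash derivation from the type's full name is purely illustrative:

        using System;

        // Not a true compile-time constant, but computed once per closed type at type
        // initialization and then read as a plain field on every subsequent access.
        static class TypeHash<T>
        {
            public static readonly int Value = typeof(T).FullName.GetHashCode();
        }

        class Demo
        {
            static void Main()
            {
                Console.WriteLine(TypeHash<string>.Value);  // computed on first use of TypeHash<string>
                Console.WriteLine(TypeHash<string>.Value);  // same cached value, no recomputation
                Console.WriteLine(TypeHash<int>.Value);     // separate value for a different type
            }
        }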

    Read the article

  • After calling a COM-dll component, C# exceptions are not caught by the debugger

    - by shlomil
    I'm using a COM DLL provided to me by a 3rd-party software company (I don't have the source code). I do know they used Java to implement it, because their objects contain property names like 'JvmVersion'. After I instantiate an object exposed by the COM DLL, exceptions in my C# program are no longer caught by the VS debugger, and every time an exception occurs I get the default Windows debugger-selection dialog (and that's while executing my program in debug mode under a full Visual Studio debugging environment). To illustrate:

        throw new Exception("exception 1");
        m_moo = new moo(); // Component taken from the COM DLL
        throw new Exception("exception 2");

    Exception 1 is caught by VS and shows the "yellow exception window". Exception 2 opens a dialog titled "Visual Studio Just-In-Time Debugger" containing the text "An unhandled win32 exception occurred in myfile.vshost.exe[1348]." followed by a list of the existing VS instances on my system to select from. I guess the instantiation of the "moo" object overrides C#'s exception handler or something like that. Am I correct, and is there a way to preserve C#'s exception handler?

    Read the article
