Search Results

Search found 15129 results on 606 pages for 'orientation changes'.

Page 392/606

  • Can you use Github App with Beanstalk?

    - by mikemick
    Being new to Git, I wanted to use a GUI (Windows based) and preferred the Github App. However, I would like to integrate this site with a Beanstalkapp account. I'm pretty sure this is possible, but I can't figure it out. Inside of the Github app, I navigate to my repository. When I choose "Tools Settings...", I place the Git Clone URL for the repository provided by Beanstalk into the "Primary Remote (origin)" field in my Github app. Now when I click "Publish" (which says "Click to publish this branch to server" when I hover over it) it changes to "Publishing...". After a few seconds, I get this error:

        server failure
        The remote server disconnected. Try again later, or if this persists, contact [email protected]

    I am pretty sure I set the SSH keys up properly (never done this before). I added the key to both the Beanstalkapp and my Github web account.

    Read the article

  • MS Access (2010) Enable Design View

    - by Tim GONELLA
    I downloaded the Access template below for doing a home inventory: http://office.microsoft.com/en-us/templates/results.aspx?qu=home%20inventory&ex=1&queryid=0d245f2a%2Dacdc%2D4161%2D92c8%2D8ba16a52ab32&AxInstalled=1&c=0#ai:TC101918100| The design view is not visible, which is a bit of a nuisance. Things I've tried:

    1) In options/options/current database/, the check boxes (enable layout view & enable design changes for tables in Datasheet view) are both greyed out.
    2) I've unblocked the file using Right-Click-Properties.
    3) I've tried copying/exporting the objects to another database, but can only copy/export the tables.
    4) I've tried holding Shift when opening the DB.
    5) Enabling all trust permissions etc.

    None of these work. Does anybody have any suggestions? (I'm using Office 2010) Thanks

    Read the article

  • umbraco front end site stopped working suddenly

    - by Srilakshmi
    Hi All, I created one web application and placed the default.aspx page in the root folder of the umbraco site (i.e., the httpdocs folder) and the application dll in the bin folder. I used the name “Default.aspx” as the other names are not working. Now the issue is that all the pages are redirecting to the default.aspx page (I haven’t made any config changes anywhere in the umbraco setup). I found this root cause and removed the default.aspx page and its respective dll from the bin folder, and now I get this error:

        The resource cannot be found.
        Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly.
        Requested URL: /default.aspx

    I'm stuck here and struggling to resolve it. Please help me out with this. Thanks, Srilakshmi

    Read the article

  • What signing method to use for public open-source projects?

    - by Irchi
    I'm publishing an open-source library on CodePlex, and want the dll files to have strong names so that they can be added to the GAC. What's the best option for signing? Should I use an SNK file? If so, everyone has access to the key. I don't have a problem with everyone having access, but is it a good approach? Should I use a PFX? If so, does it mean that other people downloading the source code are not able to build the solution? What I would like is to be the only person with access to the key, so that the signed assemblies also have a level of authenticity, but meanwhile not prevent other developers from downloading, building, or changing the source code for themselves, and being able to post changes back to the main project.
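
    One commonly used arrangement for exactly this situation (not mentioned in the question, so treat it as a hedged sketch) is delay signing: the repository contains only the public half of the key, every contributor can build, and only the key holder completes the signature before a release. A minimal C# sketch of the assembly-level attributes involved, with "PublicKeyOnly.snk" as an assumed file name:

        // Delay-signing sketch: the checked-in .snk holds only the public key
        // (e.g. extracted from the full key pair with "sn -p keypair.snk PublicKeyOnly.snk").
        using System.Reflection;

        [assembly: AssemblyKeyFile("PublicKeyOnly.snk")]
        [assembly: AssemblyDelaySign(true)]

        // Contributors build delay-signed assemblies and can skip verification locally
        // with "sn -Vr"; the key holder re-signs with the private key ("sn -R") for release.

    Whether this fits depends on how much friction the extra sn.exe steps add for casual contributors.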

    Read the article

  • SetCursor reverts after a mouse move

    - by Joe Ludwig
    I am using SetCursor to set the system cursor to my own image. The code looks something like this:

        // member on some class
        HCURSOR _cursor;

        // at init time
        _cursor = LoadCursorFromFile("somefilename.cur");

        // in some function
        SetCursor(_cursor);

    When I do this the cursor does change, but on the first mouse move message it changes back to the default system arrow cursor. This is the only code in the project that is setting the cursor. What do I need to do to make the cursor stay the way I set it?

    Read the article

  • Change element class without using css animation

    - by Akshat
    I have a class with the following type of CSS animation:

        .cssanimation {
            -webkit-transition: 0.2s all ease-in-out;
            -o-transition: 0.2s all ease-in-out;
            -moz-transition: 0.2s all ease-in-out;
            transition: 0.2s all ease-in-out;
            /* .. some other changes in position */
        }

    I have the div:

        <div id="thediv"> ... </div>

    and I add the class with:

        $('#thediv').addClass('cssanimation'); // animates the object

    I do use this animation at some point, but sometimes I'd like to add the class without invoking the animation. Does jQuery have a way in which I can add classes without invoking their CSS animations?

    Read the article

  • tsql proc logic help

    - by bacis09
    I am weak in SQL and need some help working through some logic with my proc. Three pieces: stored procedure, table 1, table 2.

    Table 1 stores the most recent data for specific customer IDs:

        Customer_id  status_dte  status_cde  app_dte
        001          2010-04-19  Y           2010-04-19

    Table 2 stores the history of data for specific customer IDs. For example:

        Log_id  customer_Id  status_dte  status_cde
        01      001          2010-04-20  N
        02      001          2010-04-19  Y
        03      001          2010-04-19  N
        04      001          2010-04-19  Y

    The stored procedure currently throws an error if the status date from table 1 is < the app_date in table 1:

        If @status_dte < app_date
            Error

    Note: @status_dte is a variable stored as the status_dte from table 1. However, I want it to throw an error when the EARLIEST status_dte from table 2 with a status_cde of 'Y' is less than the app_dte column in table 1. Keep in mind that this earliest date is not stored anywhere; the history of data changes per customer. Another customer might have the following history:

        Log_id  customer_Id  status_dte  status_cde
        01      002          2010-04-20  N
        02      002          2010-04-18  N
        03      002          2010-04-19  Y
        04      002          2010-04-19  Y

    Any ideas on how I can approach this?

    Read the article

  • Difference between Popup's IsOpen and Visibility properties?

    - by cfouche
    I've played around with the WPF Popup control and as far as I can see, the Visibility property is superfluous. If you have a Popup with IsOpen = True, it will be visible even if its Visibility = Collapsed. If you have a Popup with IsOpen = False, then its Visibility will be Collapsed, and it will remain Collapsed when IsOpen changes to true and the Popup appears (i.e. you'll have something that appears on your screen, even though Snoop says it is Collapsed). Why does the Popup control have both these properties? Am I missing something here?

    Read the article

  • How to change the URL of the page when a jQuery UI tab is clicked

    - by Aakash Chakravarthy
    Hello, I have a jQuery tabs list like:

        <ul id="tabsList">
            <li><a href="#tab-1">TAB 1</a></li>
            <li><a href="#tab-2">TAB 2</a></li>
            <li><a href="#tab-3">TAB 3</a></li>
        </ul>

    and contents for it like:

        <div id="tab-1">...</div>
        <div id="tab-2">...</div>
        <div id="tab-3">...</div>

    When a tab is clicked, the tab changes correctly. But I want the id of the tab to be in the URL, i.e. when tab 2 is clicked, the URL should change to http://example.com/index.htm#tab-2, and when tab 1 is clicked, the URL should change to http://example.com/index.htm#tab-1. How do I do this?

    Read the article

  • How to use a c# datagridview to update a database file just like Access does?

    - by mackeyka
    I have googled everywhere and I am finally giving up and asking here. I am working in Visual Studio 2010 with C#. I have set up a form with a DataGridView connected to an MSSQL database and I need to save changes made in the DataGridView back to the physical database. I am having some success, but I think that I am going about some of it completely wrong because I cannot get it to save consistently. What I really want is for the updates to work just like they do when working with Access. When I edit a row in the DataGridView and then leave that row, either by selecting another row, selecting some other control on the form, changing to another form, or quitting the application, the row should be automatically updated in the physical database. The first part of this question, then, is: what is the proper event to use to trigger the save? And second, what methods should be used to actually write the data to the database?
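
    As a rough illustration of the kind of wiring being asked about (all names here — the BindingSource, table adapter and table — are assumptions, not anything from the question), a typed-dataset setup often commits a row when the user leaves it by handling the grid's RowValidated event:

        // Hedged sketch: push the edited row back to the database when the user leaves it,
        // giving Access-like row-by-row saves.
        private void dataGridView1_RowValidated(object sender, DataGridViewCellEventArgs e)
        {
            bindingSource1.EndEdit();                            // flush pending edits into the DataTable
            customersTableAdapter.Update(myDataSet.Customers);   // write changed/added/deleted rows to SQL Server
        }

    The same two calls can be repeated in the form's FormClosing handler so edits made just before quitting are not lost.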

    Read the article

  • Hitting a memory limit slows down the .Net application

    - by derdo
    We have a 64-bit C#/.NET 3.0 application that runs on a 64-bit Windows server. From time to time the app can use a large amount of memory, which is available. In some instances the application stops allocating additional memory and slows down significantly (500+ times slower). When I check the memory from the task manager, the amount of memory used barely changes. The application keeps on running very slowly and never gives an out of memory exception. Any ideas? Let me know if more data is needed.

    Read the article

  • .NET Code Evolution

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/07/24/153504.aspx

    At my day job I do look at a lot of code written by other people. Most of the code is quite good and some is even a masterpiece. And there is also code which makes you think WTF… oh, it was written by me. Hm, not so bad after all. There are many excuses (or reasons) for bad code. Most often it is time pressure, followed by not enough ambition (who cares) or insufficient training. Normally I do care about code quality quite a lot, which makes me a (perceived) slow worker who writes many tests and refines the code quite a lot because of its design deficiencies. Most of the deficiencies I find by putting my design under stress while checking for invariants. It also helps a lot to step into the code with a debugger (sometimes also Windbg). I do this much more often when my tests are red. That way I get a much better understanding of what my code really does and not what I think it should be doing.

    This time I want to show you how code can evolve over the years with different .NET Framework versions. Once there was a time when .NET 1.1 was new and many C++ programmers switched over to get rid of uninitialized pointers and memory leaks. There were also nice new data structures available, such as the Hashtable, which is a fast lookup table with O(1) time complexity. All was good and much code was written since then. In 2005 a new version of the .NET Framework arrived which brought many new things like generics and new data structures. The “old-fashioned” Hashtable was coming to an end and everyone used the new Dictionary<xx,xx> type instead, which was type safe and faster because the object-to-type conversion (aka boxing) was no longer necessary. I think 95% of all Hashtables and dictionaries use string as key. Often it is convenient to ignore casing to make it easy to look up values which the user did enter. An often followed route is to convert the string to upper case before putting it into the Hashtable.

        Hashtable Table = new Hashtable();

        void Add(string key, string value)
        {
            Table.Add(key.ToUpper(), value);
        }

    This is valid and working code, but it has problems. First, we can pass the Hashtable a custom IEqualityComparer to do the string matching case insensitively. Second, we can switch over to the now also old Dictionary type to become a little faster, and we can keep the original keys (not upper cased) in the dictionary.

        Dictionary<string, string> DictTable = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

        void AddDict(string key, string value)
        {
            DictTable.Add(key, value);
        }

    Many people do not use the other ctors of Dictionary because they shy away from the overhead of writing their own comparer. They do not know that .NET already has predefined comparers for strings at hand which you can use directly.

    Today in the many-core era we use threads all over the place. Sometimes things break in subtle ways, but most of the time it is sufficient to place a lock around the offender. Threading has become so mainstream that it may sound weird that in the year 2000 some guy got a huge incentive for the idea to reduce the time to process calibration data from 12 hours to 6 hours by using two threads on a dual-core machine. Threading does make it easy to become faster at the expense of correctness. Correct and scalable multithreading can be arbitrarily hard to achieve depending on the problem you are trying to solve.
    Let's suppose we want to process millions of items with two threads and count the items processed by all threads. Typical beginner's code might look like this:

        int Counter;

        void IJustLearnedToUseThreads()
        {
            var t1 = new Thread(ThreadWorkMethod);
            t1.Start();
            var t2 = new Thread(ThreadWorkMethod);
            t2.Start();
            t1.Join();
            t2.Join();

            if (Counter != 2 * Increments)
                throw new Exception("Hmm " + Counter + " != " + 2 * Increments);
        }

        const int Increments = 10 * 1000 * 1000;

        void ThreadWorkMethod()
        {
            for (int i = 0; i < Increments; i++)
            {
                Counter++;
            }
        }

    It throws an exception with a message like “Hmm 10.222.287 != 20.000.000” and never completes successfully. The code fails because the assumption that Counter++ is an atomic operation is wrong. The ++ operator is just a shortcut for

        Counter = Counter + 1

    This involves reading the counter from a memory location into the CPU, incrementing the value on the CPU and writing the new value back to the memory location. When we look at the generated assembly code we will see only

        inc dword ptr [ecx+10h]

    which is only one instruction. Yes, it is one instruction, but it is not atomic. All modern CPUs have several layers of caches (L1, L2, L3) which try to hide how slow actual main memory accesses are. Since cache is just another word for redundant copy, it can happen that one CPU reads a value from main memory into the cache, modifies it and writes it back to main memory. The problem is that at least the L1 cache is not shared between CPUs, so it can happen that one CPU makes changes to values which did change in the meantime in main memory. From the exception you can see we did increment the value 20 million times, but half of the changes were lost because we overwrote the already changed value from the other thread. This is a very common case and people learn to protect their data with proper locking.

        void Intermediate()
        {
            var time = Stopwatch.StartNew();
            Action acc = ThreadWorkMethod_Intermediate;
            var ar1 = acc.BeginInvoke(null, null);
            var ar2 = acc.BeginInvoke(null, null);
            ar1.AsyncWaitHandle.WaitOne();
            ar2.AsyncWaitHandle.WaitOne();

            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Intermediate did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Intermediate()
        {
            for (int i = 0; i < Increments; i++)
            {
                lock (this)
                {
                    Counter++;
                }
            }
        }

    This is better and uses the .NET thread pool to get rid of manual thread management. It gives the expected result, but it can result in deadlocks because you lock on this. This is in general a bad idea since it can lead to deadlocks when other threads use your class instance as a lock object. It is therefore recommended to create a private object as the lock object to ensure that nobody else can lock your lock object.

    When you read more about threading you will read about lock-free algorithms. They are nice and can improve performance quite a lot, but you need to pay close attention to the CLR memory model. It makes quite weak guarantees in general, but it can still work because your CPU architecture gives you more invariants than the CLR memory model. For a simple counter there is an easy lock-free alternative present with the Interlocked class in .NET. As a general rule you should not try to write lock-free algos, since most likely you will fail to get it right on all CPU architectures.
        void Experienced()
        {
            var time = Stopwatch.StartNew();
            Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
            Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
            t1.Wait();
            t2.Wait();

            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Experienced did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Experienced()
        {
            for (int i = 0; i < Increments; i++)
            {
                Interlocked.Increment(ref Counter);
            }
        }

    Since time moves forward we do not use threads explicitly anymore but the much nicer Task abstraction, which was introduced with .NET 4 in 2010. It is educational to look at the generated assembly code. The Interlocked.Increment method must be called, which does wondrous things, right? Let's see:

        lock inc dword ptr [eax]

    The first thing to note is that there is no method call at all. Why? Because the JIT compiler knows very well about CPU intrinsic functions. Atomic operations which lock the memory bus to prevent other processors from reading stale values are such things. Second: this is the same increment call prefixed with a lock instruction. The only reason for the existence of the Interlocked class is that the JIT compiler can compile it to the matching CPU intrinsic functions, which can not only increment by one but can also do an add, an exchange and a combined compare-and-exchange operation. But be warned that the correct usage of its methods can be tricky. If you try to be clever and look at the generated IL code and try to reason about its efficiency you will fail. Only the generated machine code counts.

    Is this the best code we can write? Perhaps. It is nice and clean. But can we make it any faster? Let's see how well we are doing currently.

        Level                       Time in s
        IJustLearnedToUseThreads    Flawed Code
        Intermediate                1,5 (lock)
        Experienced                 0,3 (Interlocked.Increment)
        Master                      0,1 (1,0 for int[2])

    That lock-free thing is really a nice thing. But if you read more about CPU caches, cache coherency and false sharing you can do even better.

        // Cache line size is 64 bytes on my machine with an 8-way associative cache;
        // try for yourself, e.g. 64 on more modern CPUs
        int[] Counters = new int[12];

        void Master()
        {
            var time = Stopwatch.StartNew();
            Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Master, 0);
            Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Master, Counters.Length - 1);
            t1.Wait();
            t2.Wait();

            Counter = Counters[0] + Counters[Counters.Length - 1];
            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Master did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Master(object number)
        {
            int index = (int) number;
            for (int i = 0; i < Increments; i++)
            {
                Counters[index]++;
            }
        }

    The key insight here is to give each core its own value. But if you simply use an integer array of two items, one for each core, and add the items at the end, you will be much slower than the lock-free version (factor 3). Each CPU core has its own cache line size, which is something in the range of 16-256 bytes. When you access a value from one location the CPU does not fetch only one value from main memory but a complete cache line (e.g. 16 bytes). This means that you do not pay for the next 15 bytes when you access them. This can lead to dramatic performance improvements and non-obvious code which is faster although it has many more memory reads than another algorithm. So what have we done here?
    We started with correct code, but it was lacking knowledge of how to use the .NET Base Class Libraries optimally. Then we tried to get fancy, used threads for the first time, and failed. Our next try was better, but it still had non-obvious issues (a lock object exposed to the outside). Knowledge increased further and we found a lock-free version of our counter, which is a nice, clean and perfectly valid solution. The last example is only here to show you how you can get the most out of threading by paying close attention to your data structures and CPU cache coherency. Although we are working in a virtual execution environment, in a high-level language with automatic memory management, it does pay off to know the details down to the assembly level. Only if you continue to learn and to dig deeper can you come up with solutions no one else was even considering. I have studied particle physics, which does help with the digging-deeper part. Have you ever tried to solve Quantum Chromodynamics equations? Compared to that, the rest must be easy ;-). Although I am no longer working in the science field, I take pride in discovering non-obvious things. This can be a very-hard-to-find bug or a new way to restructure data to make something 10 times faster. Now I need to get some sleep …

    Read the article

  • Must .aspx files have a page directive?

    - by Keith Bloom
    Around 90% of the pages for our websites have no .NET code embedded in them, yet are published as .aspx files. I want these to render as fast as possible, so I'm removing as much as I can. Does the .NET page directive have an impact on performance? I am thinking about two factors: the page speed for each GET, and what happens when the file changes. The CMS system re-creates each page daily and I'm wondering if this triggers the ASP.NET compilation process.

    Read the article

  • Calling UITableView's delegate methods directly

    - by RickiG
    Hi, I was looking for a way to call the edit method directly:

        - (void)tableView:(UITableView *)theTableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath

    I have all my logic for animating manipulated cells, removing from my model array etc. in this method. It is getting called when a user swipes, adds or rearranges, but I would like to call it manually/directly as a background thread changes my model. I have constructed an NSIndexPath like so:

        NSIndexPath *path = [NSIndexPath indexPathForRow:i inSection:1];

    I just can't figure out how to call something like:

        [self.tableview commitEditingStyle:UITableViewCellEditingStyleDelete forRowAtIndexPath:path];

    Do I need to gain access to the methods of this plain-style UITableView in another way? Thanks :)

    Read the article

  • Having encoding problems in Aptana Studio

    - by keune
    A few months ago, I was working on a PHP project in Aptana Studio. It was version 1.5 or something. Later I installed Aptana 2.0 and created a new project with the same files. Back then it was UTF-8, so I chose UTF-8 for the project's text file encoding. When I make changes in any PHP file using Aptana, it gives the error:

        Warning: Cannot modify header information - headers already sent...

    I know it's a problem related to encoding. What can I do?

    Read the article

  • Is it safe to convert varchar and char into nvarchar and nchar in SQL Server?

    - by Svish
    We currently have a number of columns in the database which are of type varchar. The application that uses them is in C# and it uses Linq2Sql for the communication (or what to call it). We would like to support unicode characters, which means we would have to convert the varchar columns into nvarchar. Is this a safe operation? Is it just a matter of changing the column type and updating the dbml file, or is there more that needs to be done? Any changes in the C# code? Do I need to somehow convert the text that already exists in the database manually, or is it handled for me?

    Read the article

  • Password reset by email without a database table

    - by jpatokal
    The normal flow for resetting a user's password by mail is this:

    1. Generate a random string and store it in a database table
    2. Email the string to the user
    3. User clicks on a link containing the string
    4. String is validated against the database; if it matches, the user's password is reset

    However, maintaining a table and expiring old strings etc. seems like a bit of an unnecessary hassle. Are there any obvious flaws in this alternative approach?

    1. Generate an MD5 hash of the user's existing password
    2. Email the hash string to the user
    3. User clicks on a link containing the string
    4. String is validated by hashing the existing password again; if it matches, the user's password is reset

    Note that the user's password is already stored in a hashed and salted form, and I'm just hashing it once more to get a unique but repeatable string. And yes, there is one obvious "flaw": the reset link thus generated will not expire until the user changes their password (clicks the link). I don't really see why this would be a problem though -- if the mailbox is compromised, the user is screwed anyway.
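
    As a rough sketch of the alternative flow described above (helper and parameter names are assumptions, and MD5 is used only because that is what the question proposes), the token can be derived from the already hashed and salted password and re-derived at validation time:

        // Hedged sketch: the reset token is a hash of the stored password hash, so it
        // needs no extra table and stops matching as soon as the password changes.
        using System;
        using System.Security.Cryptography;
        using System.Text;

        static string ResetToken(string storedPasswordHash)
        {
            using (var md5 = MD5.Create())
            {
                byte[] digest = md5.ComputeHash(Encoding.UTF8.GetBytes(storedPasswordHash));
                return BitConverter.ToString(digest).Replace("-", "").ToLowerInvariant();
            }
        }

        static bool TokenIsValid(string tokenFromLink, string storedPasswordHash)
        {
            // Re-derive the token from the current stored hash; it only matches while
            // the password is unchanged.
            return string.Equals(tokenFromLink, ResetToken(storedPasswordHash), StringComparison.OrdinalIgnoreCase);
        }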

    Read the article

  • How to make sure web services are kept stable from one release to the next?

    - by Tor Hovland
    The company where I work is a software vendor with a suite of applications. There are also a number of web services, and of course they have to be kept stable even if the applications change. We haven't always succeeded with this, and sometimes a customer finds that a service is not behaving as before after upgrading. We now want to handle this better. In general, web services shouldn't change, and if they have to, at least we will know about it and document the change. But how do we ensure this? One idea is to compare the WSDL files with the previous versions at every release. That will make sure the interfaces don't change, but it won't detect that the behavior changes, for example if a bug is introduced in some common library. Another idea is to build up a suite of service tests, for example using soapUI. But then we'll never know if we have covered enough cases. What are some best practices regarding this?
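
    For the WSDL-comparison idea mentioned above, one cheap hedged sketch (the URL and snapshot path are assumptions) is a release-time check that the currently served WSDL still matches the snapshot stored with the previous release; it catches interface drift, though not behavioral changes:

        using System;
        using System.IO;
        using System.Net;

        static bool WsdlUnchanged(string serviceUrl, string snapshotPath)
        {
            using (var client = new WebClient())
            {
                string current = client.DownloadString(serviceUrl + "?wsdl");
                string previous = File.ReadAllText(snapshotPath);
                // A plain string comparison is crude; normalizing whitespace or comparing
                // the parsed XML would reduce false alarms.
                return string.Equals(current.Trim(), previous.Trim(), StringComparison.Ordinal);
            }
        }

    Behavioral coverage still needs the test-suite approach (soapUI or coded integration tests), which is where the "have we covered enough cases" question remains open.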

    Read the article

  • Can I programmatically determine if a PNG is animated?

    - by alex
    I have PNG (as well as JPEG) images uploaded to my site. They should be static (i.e. one frame). There is such a thing as APNG (it will be animated in Firefox). According to the Wikipedia article:

        APNG hides the subsequent frames in PNG ancillary chunks in such a way that APNG-unaware applications would ignore them, but there are otherwise no changes to the format to allow software to distinguish between animated and non-animated images.

    Does this mean it is impossible to determine if a PNG is animated with code? If it is possible, can you please point me in the right direction PHP-wise (GD, ImageMagick)?
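
    The question asks for PHP, but the underlying check is format-level rather than library-specific: an APNG carries an "acTL" (animation control) ancillary chunk before the first IDAT chunk, while a plain PNG does not. A hedged sketch of that chunk scan (shown here in C# as a language-agnostic illustration; the same walk works in PHP):

        using System;
        using System.IO;

        // Walk the PNG chunk list and report whether an acTL chunk appears before IDAT.
        static bool LooksAnimated(string path)
        {
            using (var stream = File.OpenRead(path))
            using (var reader = new BinaryReader(stream))
            {
                reader.ReadBytes(8);                               // skip the 8-byte PNG signature
                while (stream.Position + 8 <= stream.Length)
                {
                    byte[] len = reader.ReadBytes(4);              // chunk length, big-endian
                    int length = (len[0] << 24) | (len[1] << 16) | (len[2] << 8) | len[3];
                    string type = new string(reader.ReadChars(4)); // e.g. IHDR, acTL, IDAT, IEND
                    if (type == "acTL") return true;               // animation control chunk => APNG
                    if (type == "IDAT" || type == "IEND") return false;
                    stream.Seek(length + 4, SeekOrigin.Current);   // skip chunk data + CRC
                }
            }
            return false;
        }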

    Read the article

  • Linux periodically "losing" ability to connect to server via SSH?

    - by gct
    I know this isn't exactly a programming question, but it popped up in my use of git for programming projects at least. I've got a web server that I use to host my git repos on, but my ubuntu box seems to "lose" the ability to connect to it via SSH. I'll get a "connection refused" error when I try to ssh or use git. Rebooting my local machine will fix the problem, but only temporarily. I can still connect to the web interface just fine, and the problem manifests with other servers as well. I've been working around it by pulling my changes over to my laptop and pushing from there, but that's sub-optimal as you can imagine. Has anyone seen something like this? I'd be tempted to say it's some kind of IP caching problem, but I can't connect even using the IP address of the server directly... Running Ubuntu 9.04

    Read the article

  • How do I best run a search on Date when it is not a :has_many association?

    - by Angela
    I have a number of activities that have a calculated scheduled date. The activities, for example Email, have an email.days method which is the number of days from a Contact.start_date on which it should be sent. This means contact.start_date + email.days yields the date on which the email is sent to the contact. I would like to use link_to around the date, so I can see all the emails and associated contacts that are to be scheduled on that date. However, this "date" is not an attribute or an association, so I'm not linking to a model's view; it's calculated. So:

    1) What should the actual "format" of the date that gets passed in the URL be? What is the method to do the consistent conversion?
    2) How do I find all instances? This "date" is not an actual attribute; it is a calculated value which changes depending on the two associated models of Contact and Email.

    Thanks.

    Read the article

  • Replication - synchronizing most of the data some of the time

    - by uncle brad
    I have some data that isn't properly "partitioned" (for lack of a better word). All inserts, processing and reporting happen on the same table. The bulk of the processing happens not long after the insert and not long after that it becomes immutable (we're talking days). I could do all inserts and processing on a new table that I replicate to the old table. When I detect that the data has become immutable I would delete the data from the new table, but I would edit the delete replication stored procedure so that the delete did not replicate. How bad an idea is this? It seems attractive at the moment (I haven't slept on it yet) because it might mitigate a performance problem with only very small changes to the application. It also seems like it might be a good way to shoot myself in the foot.

    Read the article

  • how to get the camera data

    - by beof
    Hello, guys. My app needs to get the camera data from the iPhone. In my ImagePickerController, there is an overlayView drawing real-time indications. I use UIGetScreenImage() to get the screenshot, and I also dump the overlayView to an image, so I can restore the original image based on these two images. If the overlayView is still, it works quite well, but if the overlayView keeps changing, UIGetScreenImage() cannot keep up with it. For example, if the overlayView changes from a rectangle to a circle, then calling UIGetScreenImage() returns with a rectangle on top of it. Is there a way to get the real-time camera data? I'd really appreciate it if someone could help.

    Read the article

  • .Remove(object) on a List<T> returned from a LINQ to SQL compiled query won't delete the object, right?

    - by soldieraman
    I am returning two lists from the database using a LINQ to SQL compiled query. While looping over the first list I remove duplicates from the second list, as I don't want to process already existing objects again. E.g.:

        // oldCustomers is a List returned by my compiled LINQ to SQL statement that I have added a .ToList() at the end to
        // Same goes for newCustomers
        foreach (Customer oC in oldCustomers)
        {
            // Do some processing
            newCustomers.Remove(newCustomers.Find(nC => nC.CustomerID == oC.CustomerID));
        }

        foreach (Customer nC in newCustomers)
        {
            // Do some processing
        }

        DataContext.SubmitChanges();

    I expect this to only save the changes that have been made to the customers in my processing and not Remove or Delete any of my customers from the database. Correct? I have tried it and it works fine - but I want to know if there is any rare case where they might actually get removed.

    Read the article

  • Reference non-GAC version of DLL in Visual Studio 2010

    - by Eric J.
    This is similar to Add Non-GAC reference to project, but the solutions presented there don't seem to help. I have a WinForms UI library (Krypton from ComponentFactory) installed in the GAC. There's a bug I want to track down in that library, so I added the source code to my solution, removed the old references from my WinForms project to the Krypton DLLs, added them back as project references, ensured Copy Local is set to true, double-checked that the path (on the reference properties tab) points to my local project, and... the GAC version is still being used while debugging. I cannot set a breakpoint in the Krypton source, Debugger.Break() and other code changes do not execute, and when I start the Visual Studio 2010 debugger, I see a Loading from ... GAC_MISL message relating to the Krypton DLLs flash by in the VS 2010 status bar. The DLLs are not copied to the WinForms Debug folder. How can I reference the "project" version of the files while debugging while leaving them registered in the GAC?

    Read the article
