Search Results

Search found 231 results on 10 pages for 'detach'.

Page 8/10

  • Cocos2D Director Pause/Resume Issue

    - by Aditya
    I am trying to play a .gif animation in Cocos2D. For this I am using the glgif library. To display the animation, I pause the Director, add a subview to show the animation, and after the animation is done I resume the Director. However, I am not able to restore the state of the Director and it shows blank. I tried this without pausing and resuming the Director and it still did not work. I also tried detaching the Director before the animation and attaching it back afterwards, and even that did not work. So is there a way to pause/suspend the Director in the application and properly restore it afterwards? Thanks. Code sample:

        [[Director sharedDirector] pause];
        [[Director sharedDirector] detach];
        AppDelegate *del = [[UIApplication sharedApplication] delegate];
        [del.window addSubview:del.viewController.view];
        [del.window makeKeyAndVisible];
        // this is code to call glgif class and start anim.

        // code to resume the director
        AppDelegate *del = [[UIApplication sharedApplication] delegate];
        [[Director sharedDirector] resume];
        [[Director sharedDirector] attachInView:del.window];
        MScene *m = [MScene node];
        [[Director sharedDirector] replaceScene:m];

  • How to force two process to run on the same CPU?

    - by kovan
    Context: I'm programming a software system that consists of multiple processes. It is written in C++ under Linux, and the processes communicate among themselves using Linux shared memory. As usual in software development, performance optimization is done in the final stage, and here I ran into a big problem. The software has high performance requirements, but on machines with 4 or 8 CPU cores (usually with more than one physical CPU) it was only able to use 3 cores, wasting 25% of the CPU power on the former and more than 60% on the latter. After much research, and having discarded mutex and lock contention, I found out that the time was being wasted on shmdt/shmat calls (detaching from and attaching to shared memory segments). After some more research, I found out that these CPUs, usually AMD Opteron and Intel Xeon, use a memory architecture called NUMA, which basically means that each processor has fast "local" memory, and accessing memory belonging to other CPUs is expensive. After doing some tests, the problem seems to be that the software is designed so that, basically, any process can pass shared memory segments to any other process, and to any thread in them. This seems to kill performance, as processes are constantly accessing memory of other processes. Question: is there any way to force pairs of processes to execute on the same CPU? I don't mean forcing them to always execute on the same fixed processor, as I don't care which one they run on, although that would do the job. Ideally, there would be a way to tell the kernel: if you schedule this process on one processor, you must also schedule this "brother" process (the process with which it communicates through shared memory) on that same processor, so that performance is not penalized.
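
    A minimal sketch of the blunt variant the question concedes "would do the job": pin both processes to one fixed core with sched_setaffinity(2). The core number here is arbitrary, and a fork()ed "brother" inherits the mask, so both ends of the shared-memory pair land on the same core (and hence the same NUMA node).

        // Sketch: hard-pin the calling process (and future children) to one core.
        #define _GNU_SOURCE
        #include <sched.h>
        #include <cstdio>
        #include <cstdlib>

        int main() {
            cpu_set_t mask;
            CPU_ZERO(&mask);
            CPU_SET(2, &mask); // CPU index chosen arbitrarily for the example
            // pid 0 means the calling process; children inherit the mask,
            // so a forked "brother" process stays on the same core
            if (sched_setaffinity(0, sizeof(mask), &mask) == -1) {
                perror("sched_setaffinity");
                return EXIT_FAILURE;
            }
            // ... attach the shared memory segments and run as before ...
            return 0;
        }

    This gives up the "schedule them anywhere, just together" ideal; grouping processes onto NUMA nodes (e.g. via cpusets) is the closest the stock kernel offers.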

  • Attached event triggering if event is same as current event

    - by Psytronic
    I have a button which starts/stops a timer. When I click it, I remove the current event handler and attach the opposite one, i.e. click "Stop", and the "Start" function is attached for the next click. However in IE 7 (not sure about 8), the newly attached event also triggers after the original function is finished. Timeline in IE7: button pushed -- "Stop" function begins -- "Stop" function removed -- "Start" function attached -- "Stop" function finishes -- "Start" function is called. I know it's to do with the attaching bit, because if I remove it, the "Start" event doesn't fire. This seems a bit odd, as I have the same situation in the "Start" function, i.e. "Start" removed, "Stop" attached, but that doesn't seem to cause an infinite loop, thankfully.

        if (btn.attachEvent) {
            btn.detachEvent("onclick", stop);
            btn.attachEvent("onclick", start);
        } else if (btn.addEventListener) {
            btn.removeEventListener("click", stop, true);
            btn.addEventListener("click", start, true);
        }

    It all works fine in FF and Chrome. I have switched the order of attach/detach, just to see if it makes a difference, but it doesn't. Addendum: sorry, but no answers involving jQuery, Prototype et al. It's not something I want to integrate for this project.

  • EF 4.0 : Save Changes Retry Logic

    - by BGR
    Hi, I would like to implement an application-wide retry system for all entity SaveChanges method calls. Technologies: Entity Framework 4.0, .NET 4.0.

        namespace Sample.Data.Store.Entities
        {
            public partial class StoreDB
            {
                public override int SaveChanges(System.Data.Objects.SaveOptions options)
                {
                    for (Int32 attempt = 1; ; )
                    {
                        try
                        {
                            return base.SaveChanges(options);
                        }
                        catch (SqlException sqlException)
                        {
                            // Increment tries
                            attempt++;
                            // Maximum number of tries
                            Int32 maxRetryCount = 5;
                            // Throw if we have reached the maximum number of retries
                            if (attempt == maxRetryCount)
                                throw;
                            // Determine if we should retry or abort
                            if (!RetryLitmus(sqlException))
                                throw;
                            else
                                Thread.Sleep(ConnectionRetryWaitSeconds(attempt));
                        }
                    }
                }

                static Int32 ConnectionRetryWaitSeconds(Int32 attempt)
                {
                    Int32 connectionRetryWaitSeconds = 2000;
                    // Backoff throttling
                    connectionRetryWaitSeconds = connectionRetryWaitSeconds * (Int32)Math.Pow(2, attempt);
                    return (connectionRetryWaitSeconds);
                }

                /// <summary>
                /// Determine from the exception if the execution
                /// of the connection should be attempted again
                /// </summary>
                /// <param name="sqlException">SQL exception</param>
                /// <returns>True if a retry is needed, false if not</returns>
                static Boolean RetryLitmus(SqlException sqlException)
                {
                    switch (sqlException.Number)
                    {
                        // The service has encountered an error
                        // processing your request. Please try again.
                        // Error code %d.
                        case 40197:
                        // The service is currently busy. Retry
                        // the request after 10 seconds. Code: %d.
                        case 40501:
                        // A transport-level error has occurred when
                        // receiving results from the server. (provider:
                        // TCP Provider, error: 0 - An established connection
                        // was aborted by the software in your host machine.)
                        case 10053:
                            return (true);
                    }
                    return (false);
                }
            }
        }

    The problem: how can I get StoreDB.SaveChanges to retry on a new DB context after an error occurred? Something similar to Detach/Attach might come in handy. Thanks in advance! Bart

  • Using a .MDF SQL Server Database with ASP.NET Versus Using SQL Server

    - by Maxim Z.
    I'm currently writing a website in ASP.NET MVC, and my database (which doesn't have any data in it yet, only the correct tables) uses SQL Server 2008, which I have installed on my development machine. I connect to the database from my application by using the Server Explorer, followed by LINQ to SQL mapping. Once I finish developing the site, I will move it over to my hosting service, which is a virtual hosting plan. I'm concerned that the SQL Server setup currently working on my development machine will be hard to reproduce on the production server, as I'll have to import all the database tables through the hosting control panel. I've noticed that it is possible to create a SQL Server database from inside Visual Studio; it is then stored in the App_Data directory. My questions are the following:

    1. Does it make sense to move my SQL Server DB out of SQL Server and into the App_Data directory as an .mdf file?
    2. If so, how can I move it? I believe this is called the Detach command, is it not?
    3. Are there any performance/security issues that can occur with an .mdf file like this?
    4. Would my intended setup work OK with a typical virtual hosting plan? I'm hoping that the .mdf database won't count against the limited number of SQL Server databases that can be created with my plan.

    I hope this question isn't too broad. Thanks in advance! Note: I'm just starting out with ASP.NET MVC and all this, so I might be completely misunderstanding how this is supposed to work.

  • jQuery performance

    - by jAndy
    Hi folks, imagine you have to do a lot of DOM manipulation (in my case, it's a kind of dynamic list). Look at this example:

        var $buffer = $('<ul/>', {
            'class': '.custom-example',
            'css': {
                'position': 'absolute',
                'top': '500px'
            }
        });

        $.each(pages[pindex], function(i, v){
            $buffer.append(v);
        });

        $buffer.insertAfter($root);

    "pages" is an array which holds LI elements as jQuery objects; "$root" is a UL element. What happens after this code is that both ULs are animated (scrolling), and finally, within the callback of animate, this code is executed:

        $root.detach();
        $root = $buffer;
        $root.css('top', '0px');
        $buffer = null;

    This works very well; the only thing I'm pi**ed off about is the performance. I do cache all DOM elements I lay a hand on. Without looking too deep into jQuery's source code, is there a chance that my performance issues are located there? Does jQuery use DocumentFragments to append things? If you create a new DOM element with var new = $('<div/>'), it is only stored in memory at this point, isn't it?

  • sql exception when transferring project from usb to c:\

    - by jello
    I'm working on a C# Windows program with Visual Studio 2008. Usually I work from school, directly on my USB drive. But when I copy the folder to my hard drive at home, an SQL exception is unhandled whenever I try to write to the database. It is unhandled at the conn.Open(); line. Here's the unhandled exception:

        Database 'L:\system\project\the_project\the_project\bin\Debug\PatientMonitoringDatabase.mdf' already exists. Choose a different database name.
        Cannot attach the file 'C:\Documents and Settings\Administrator\My Documents\system\project\the_project\the_project\bin\Debug\PatientMonitoringDatabase.mdf' as database 'PatientMonitoringDatabase'.

    It's weird, because my connection string says |DataDirectory|, so it should work on any drive. Here's my connection string:

        string connStr = "Data Source=.\\SQLEXPRESS;AttachDbFilename=|DataDirectory|\\PatientMonitoringDatabase.mdf; " +
                         "Initial Catalog=PatientMonitoringDatabase; " +
                         "Integrated Security=True";

    Someone told me to: connect to localhost with SQL Server Management Studio Express, and remove/detach the existing PatientMonitoringDatabase database. Whether it's a persistent database or only active within a running application, you can't have two databases with the same name attached to a SQL Server instance at the same time. So I did that, and now it gives me:

        Directory lookup for the file "C:\Documents and Settings\Administrator\My Documents\system\project\the_project\the_project\bin\Debug\PatientMonitoringDatabase.mdf" failed with the operating system error 5 (Access is denied.).
        Cannot attach the file 'C:\Documents and Settings\Administrator\My Documents\system\project\the_project\the_project\bin\Debug\PatientMonitoringDatabase.mdf' as database 'PatientMonitoringDatabase'

    I checked the file's properties, and I have Allow for Everyone. Does anyone know what's going on here?

  • NHibernate Session per Call in WCF - How to Rollback

    - by Corey Coogan
    I've implemented some components to use WCF with both an IoC container (StructureMap) and the session-per-call pattern. The NHibernate stuff is mostly taken from here: http://realfiction.net/Content/Entry/133. It seems to be OK, but I want to open a transaction with each call and commit at the end, rather than just Flush(), which is how it's done in the article. Here's where I am running into some problems and could use some advice. I haven't figured out a good way to roll back. I realize I can check the CommunicationState and, if there's an exception, roll back, like so:

        public void Detach(InstanceContext owner)
        {
            if (Session != null)
            {
                try
                {
                    if (owner.State == CommunicationState.Faulted)
                        RollbackTransaction();
                    else
                        CommitTransaction();
                }
                finally
                {
                    Session.Dispose();
                }
            }
        }

        void CommitTransaction()
        {
            if (Session.Transaction != null && Session.Transaction.IsActive)
                Session.Transaction.Commit();
        }

        void RollbackTransaction()
        {
            if (Session.Transaction != null && Session.Transaction.IsActive)
                Session.Transaction.Rollback();
        }

    However, I almost never return a faulted state from a service call. I would typically handle the exception, return an appropriate indicator on my response object, and roll back the transaction myself. The only way I can think of handling this would be to inject not only repositories into my WCF services, but also an ISession, so I can roll back and handle things the way I want. That doesn't sit well with me and seems kind of leaky. Anyone else handling the same problem?

  • C# - Removing event handlers - FormClosing event or Dispose() method

    - by Andy
    Suppose I have a form opened via the .ShowDialog() method. At some point I attach some event handlers to some controls on the form, e.g.:

        // Attach radio button event handlers.
        this.rbLevel1.Click += new EventHandler(this.RadioButton_CheckedChanged);
        this.rbLevel2.Click += new EventHandler(this.RadioButton_CheckedChanged);
        this.rbLevel3.Click += new EventHandler(this.RadioButton_CheckedChanged);

    When the form closes, I need to remove these handlers, right? At present, I am doing this when the FormClosing event is fired, e.g.:

        private void Foo_FormClosing(object sender, FormClosingEventArgs e)
        {
            // Detach radio button event handlers.
            this.rbLevel1.Click -= new EventHandler(this.RadioButton_CheckedChanged);
            this.rbLevel2.Click -= new EventHandler(this.RadioButton_CheckedChanged);
            this.rbLevel3.Click -= new EventHandler(this.RadioButton_CheckedChanged);
        }

    However, I have seen some examples where handlers are removed in the Dispose() method. Is there a 'best-practice' way of doing this? (Using C#, Winforms, .NET 2.0) Thanks.

  • Calling QAxWidget method outside of the GUI thread

    - by user304361
    I'm beginning to wonder if this is impossible, but I thought I'd ask in case there's a clever way around the problems I'm having. I have a Qt application that uses an ActiveX control. The control is held by a QAxWidget, and the QAxWidget itself is contained within another QWidget (I needed to add additional signals/slots to the widget, and I couldn't just subclass QAxWidget because the class doesn't permit that). When I need to interact with the ActiveX control, I call a method of the QWidget, which in turn calls the dynamicCall method of the QAxWidget in order to invoke the appropriate method of the ActiveX control. All of that is working fine. However, one method of the ActiveX control takes several seconds to return. When I call this method, my entire GUI locks up for a few seconds until the method returns. This is undesirable. I'd like the ActiveX control to go off and do its processing by itself and come back to me when it's done, without locking up the Qt GUI. I've tried a few things without success:

    1. Creating a new QThread and calling QAxWidget::dynamicCall from the new thread
    2. Connecting a signal to the appropriate slot method of the QAxWidget and calling the method using signals/slots instead of using dynamicCall
    3. Calling QAxWidget::dynamicCall using QtConcurrent::run

    Nothing seems to affect the behavior. No matter how or where I use dynamicCall (or trigger the appropriate slot of the QAxWidget), the GUI locks until the ActiveX control completes its operation. Is there any way to detach this ActiveX processing from the Qt GUI thread so that the GUI doesn't lock up while the ActiveX control is running a method? Is there something clever I can do with QAxBase or QAxObject to get my desired results?
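
    This smells like COM apartment rules rather than Qt: the control lives in the GUI thread's single-threaded apartment, so even a call issued from QtConcurrent::run gets marshalled back to the GUI thread's message loop. A hedged sketch of one workaround, assuming the control can also be driven as a non-visual QAxObject created inside a worker thread (the ProgID "Vendor.Control.1" and the method name "SlowMethod" are placeholders):

        #include <QThread>
        #include <QAxObject>

        // Worker thread that owns its own instance of the COM object. Because
        // the QAxObject is created inside run(), it belongs to this thread's
        // COM apartment, so the slow call no longer blocks the GUI thread.
        class AxWorker : public QThread
        {
            Q_OBJECT
        signals:
            void resultReady(const QVariant &result);
        protected:
            void run()
            {
                QAxObject control("Vendor.Control.1");            // placeholder ProgID
                QVariant r = control.dynamicCall("SlowMethod()"); // placeholder method
                emit resultReady(r);
            } // the QAxObject is destroyed here, in the thread that created it
        };

    Whether this works depends on the control: some insist on being created in the main STA or need their visible widget, in which case the long call cannot be moved off the GUI thread at all.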

  • How to resize font on the GUI buttons in MFC

    - by ame
    I have a GUI written in MFC for a Windows CE device. However, I need to resize some of the buttons and their corresponding text, and I can't figure out how to change the font size. The following code fragments did not help. Trial 1:

        CFont fnt2;
        fnt2.CreateFont(10, 0, 0, 0, FW_NORMAL, FALSE, FALSE, FALSE,
                        ANSI_CHARSET, OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS,
                        DEFAULT_QUALITY, DEFAULT_PITCH, L"MS Shell Dlg");
        m_btnForceAnalog.SetFont(&fnt2);
        fnt2.Detach();

    Trial 2:

        LOGFONT lf;
        memset(&lf, 0, sizeof(LOGFONT));
        lf.lfHeight = 5; // Request a 100-pixel-height font
        // DP and LP are always the same on CE - the conversion below is used by CFont::CreateFontIndirect
        HDC hDC = ::GetDC(NULL);
        lf.lfHeight = ::GetDeviceCaps(hDC, LOGPIXELSY) * lf.lfHeight;
        ::ReleaseDC(NULL, hDC);
        lf.lfHeight /= 720; // 72 points/inch, 10 decipoints/point
        if (lf.lfHeight > 0)
            lf.lfHeight *= -1;
        OutputDebugString(L"\nAbout to call the setfont\n");
        lstrcpy(lf.lfFaceName, _T("Arial"));
        HFONT font = ::CreateFontIndirectW(&lf);
        CWnd* myButton = GetDlgItem(IDC_FORCE_ANALOG_BTN); // The button with regular font
        myButton->SendMessageW(WM_SETFONT, (WPARAM)font, TRUE);

    Thank you!
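
    A note on Trial 1, offered as a guess rather than a verified CE fix: fnt2 is a local whose handle is detached but never stored, so the HFONT leaks on every call and the font's lifetime is left to chance. The conventional MFC pattern keeps the CFont as a dialog member (the class name below is hypothetical), and a negative height in CreateFont requests the character height in pixels:

        // Sketch: keep the CFont alive as long as the button uses it.
        class CMyDlg : public CDialog   // hypothetical dialog class
        {
            CFont m_btnFont; // member, so it outlives the click handler

            void ApplyBigFont()
            {
                // negative lfHeight selects the character height in pixels
                m_btnFont.CreateFont(-20, 0, 0, 0, FW_NORMAL, FALSE, FALSE, FALSE,
                                     ANSI_CHARSET, OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS,
                                     DEFAULT_QUALITY, DEFAULT_PITCH, L"MS Shell Dlg");
                m_btnForceAnalog.SetFont(&m_btnFont);
            }
        };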

  • Ruby script/console and Ruby script/server using two different DBs?

    - by aronchick
    Has anyone seen script/console and script/server load two different databases (though both report using the same)? Here's the first output:

        $ script/server
        => Booting WEBrick
        => Rails 2.3.5 application starting on http://0.0.0.0:3000
        => Call with -d to detach
        => Ctrl-C to shutdown server
        [2010-03-21 15:54:05] INFO  WEBrick 1.3.1
        [2010-03-21 15:54:05] INFO  ruby 1.8.7 (2010-01-10) [i386-mingw32]
        [2010-03-21 15:54:05] INFO  WEBrick::HTTPServer#start: pid=7148 port=3000

    No errors. I then run my standard code for entering a form - no problems. Checking the dev database (.yml at bottom):

        mysql> select * from books;
        [...]
        | 712 | Book | Book Name | 2010-03-21 22:29:22 | 2010-03-21 22:29:22 |
        [...]
        712 rows in set (0.00 sec)

    The code CLEARLY saved it seconds ago. And now here's the output of script/console:

        $ script/console
        Loading development environment (Rails 2.3.5)
        >> Books.all
        => []

    Nothing. Upon further inspection, it's using the production database, but I can't figure out why. Any thoughts here? All consoles have been closed and reopened. UPDATE: requested .yml file (can't see how it'd be helpful; user name and password are the same for each):

        development:
          adapter: mysql
          database: BooksDBdev
          username: <user name>
          password: <long string>
          timeout: 5000

        # Warning: The database defined as "test" will be erased and
        # re-generated from your development database when you run "rake".
        # Do not set this db to the same as development or production.
        test:
          adapter: mysql
          database: BooksDBtest
          username: <user name>
          password: <long string>
          timeout: 5000

        production:
          adapter: mysql
          database: BooksDB
          username: <user name>
          password: <long string>
          timeout: 5000

  • How do I launch background jobs w/ paramiko?

    - by sophacles
    Here is my scenario: I am trying to automate some tasks using Paramiko. The tasks need to be started in this order, using the notation (host, task): (A, 1), (B, 2), (C, 2), (A, 3), (B, 3) -- essentially starting servers and clients for some testing in the correct order. Further, because the networking may get mucked up during the tests, and because I need some of the output from them, I would like to just redirect output to a file. In similar scenarios the common response is to use 'screen -m -d' or to use 'nohup'. However, with Paramiko's exec_command, nohup doesn't actually exit. Using:

        bash -c -l nohup test_cmd &

    doesn't work either; exec_command still blocks until the process ends. In the screen case, output redirection doesn't work very well (actually, doesn't work at all as best I can figure out). So, after all that explanation, my question is: is there an easy, elegant way to detach processes and capture output in such a way as to end Paramiko's exec_command blocking? Update: the dtach command works nicely for this!

  • c++ d3d hooking - COM vtable

    - by Mango
    Trying to make a Fraps type program. See comment for where it fails.

        #include "precompiled.h"

        typedef IDirect3D9* (STDMETHODCALLTYPE* Direct3DCreate9_t)(UINT SDKVersion);
        Direct3DCreate9_t RealDirect3DCreate9 = NULL;

        typedef HRESULT (STDMETHODCALLTYPE* CreateDevice_t)(UINT Adapter,
            D3DDEVTYPE DeviceType, HWND hFocusWindow, DWORD BehaviorFlags,
            D3DPRESENT_PARAMETERS* pPresentationParameters,
            IDirect3DDevice9** ppReturnedDeviceInterface);
        CreateDevice_t RealD3D9CreateDevice = NULL;

        HRESULT STDMETHODCALLTYPE HookedD3D9CreateDevice(UINT Adapter,
            D3DDEVTYPE DeviceType, HWND hFocusWindow, DWORD BehaviorFlags,
            D3DPRESENT_PARAMETERS* pPresentationParameters,
            IDirect3DDevice9** ppReturnedDeviceInterface)
        {
            // this call makes it jump to HookedDirect3DCreate9 and crashes. i'm doing something wrong
            HRESULT ret = RealD3D9CreateDevice(Adapter, DeviceType, hFocusWindow,
                                               BehaviorFlags, pPresentationParameters,
                                               ppReturnedDeviceInterface);
            return ret;
        }

        IDirect3D9* STDMETHODCALLTYPE HookedDirect3DCreate9(UINT SDKVersion)
        {
            MessageBox(0, L"Creating d3d", L"", 0);
            IDirect3D9* d3d = RealDirect3DCreate9(SDKVersion);
            UINT_PTR* pVTable = (UINT_PTR*)(*((UINT_PTR*)d3d));
            RealD3D9CreateDevice = (CreateDevice_t)pVTable[16];

            DetourTransactionBegin();
            DetourUpdateThread(GetCurrentThread());
            DetourAttach(&(PVOID&)RealD3D9CreateDevice, HookedD3D9CreateDevice);
            if (DetourTransactionCommit() != ERROR_SUCCESS)
            {
                MessageBox(0, L"failed to create createdev hook", L"", 0);
            }
            return d3d;
        }

        bool APIENTRY DllMain(HINSTANCE hModule, DWORD fdwReason, LPVOID lpReserved)
        {
            if (fdwReason == DLL_PROCESS_ATTACH)
            {
                MessageBox(0, L"", L"", 0);
                RealDirect3DCreate9 = (Direct3DCreate9_t)GetProcAddress(
                    GetModuleHandle(L"d3d9.dll"), "Direct3DCreate9");
                DetourTransactionBegin();
                DetourUpdateThread(GetCurrentThread());
                DetourAttach(&(PVOID&)RealDirect3DCreate9, HookedDirect3DCreate9);
                DetourTransactionCommit();
            }
            // TODO detach hooks
            return true;
        }
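
    A hedged guess at the crash, for anyone reading along: CreateDevice is a COM method, and under stdcall its interface pointer travels as an implicit first argument, so a detour with the signature above shifts every argument on the stack. The sketch below adds the missing parameter; it is an assumption about the fix, not a verified one:

        // Sketch: include the implicit "this" (the IDirect3D9*) in both the
        // typedef and the hook, then forward it unchanged to the trampoline.
        typedef HRESULT (STDMETHODCALLTYPE* CreateDevice_t)(IDirect3D9* pThis,
            UINT Adapter, D3DDEVTYPE DeviceType, HWND hFocusWindow, DWORD BehaviorFlags,
            D3DPRESENT_PARAMETERS* pPresentationParameters,
            IDirect3DDevice9** ppReturnedDeviceInterface);

        HRESULT STDMETHODCALLTYPE HookedD3D9CreateDevice(IDirect3D9* pThis,
            UINT Adapter, D3DDEVTYPE DeviceType, HWND hFocusWindow, DWORD BehaviorFlags,
            D3DPRESENT_PARAMETERS* pPresentationParameters,
            IDirect3DDevice9** ppReturnedDeviceInterface)
        {
            return RealD3D9CreateDevice(pThis, Adapter, DeviceType, hFocusWindow,
                                        BehaviorFlags, pPresentationParameters,
                                        ppReturnedDeviceInterface);
        }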

  • SQL Server Database In Single User Mode after Failover

    - by jlichauc
    Here is a weird situation we experienced with a SQL Server 2008 database mirroring failover. We have a pair of mirrored databases running in high-availability mode, and both the principal and mirror showed as synchronized. As part of some maintenance, I triggered a manual failover of the principal to the mirror. However, after the failover the principal was in single-user mode instead of the expected "Principal/Synchronized" state we usually get. The database had been in multi-user mode on the previous principal before this happened. We ended up stopping all applications, restarting the SQL Server instances, and executing "ALTER DATABASE ... SET MULTI_USER" to bring the database back to the expected "Principal/Synchronized" state in multi-user mode.

    Question: does anyone know where SQL Server stores information about whether a database should be in single-user mode or not? I'm wondering if there is some system database or table that has this setting recorded somewhere. In particular, we once had an incident with the database on the original principal (the one I was failing over to) where, when trying to detach the database, it was put into single-user mode. I'm wondering if that setting is cached somewhere and is the reason SQL Server put it back into single-user mode after a failover.

  • Ajax heavy JS apps using excessive amounts of memory over time.

    - by Shane Reustle
    I seem to have some pretty large memory leaks in an app that I am working on. The app itself is not very complex. Every 15 seconds, the page requests approx 40 KB of JSON from the server and draws a table on the page using it. It is cheaper to redraw the table from scratch because the data is almost always new. I am attaching a few events to the table, approx 5 per line, 30 lines in the table. I use jQuery's .html() method to put the new HTML into the container and overwrite the existing content. I do this specifically so that jQuery's special cleanup functions go in and attempt to detach all events on the elements inside the element it is overwriting. I then also delete the large HTML variables once they are sent to the DOM, using delete my_var. I have checked for circular references and attached events that are never cleared a few times, but never REALLY dug into it. I was wondering if someone could give me a few pointers on how to optimize a very heavy app like this. I just picked up "High Performance JavaScript" by Nicholas Zakas, but haven't had much time to get into it yet. To give an idea of how much memory this is using: after ~4 hours, it is using about 420,000 KB in Chrome, and much more in Firefox or IE. Thanks!

  • how to load image from file using MFC

    - by Sweety Khan
    My browse button code is:

        void CFileOpenDlg::OnBnClickedButton1()
        {
            // TODO: Add your control notification handler code here
            CFileDialog dlg(TRUE);
            int result = dlg.DoModal();
            if (result == IDOK)
            {
                path = dlg.GetPathName();
                UpdateData(FALSE);
            }
        }

    And this is the code for loading an image from a resource, but that does not work for loading an image from a file. I know LoadImage() is used for this, but how? How can I edit this code to load an image from a file? Please help...

        void CFileOpenDlg::OnBnClickedButton2()
        {
            // TODO: Add your control notification handler code here
            CRect r;
            CBitmap* m_bitmap;
            CDC dc, *pDC;
            BITMAP bmp;
            m_bitmap = new CBitmap();
            m_bitmap->LoadBitmapW(IDB_BITMAP1);
            m_bitmap->GetBitmap(&bmp);
            pDC = this->GetDC();
            dc.CreateCompatibleDC(pDC);
            dc.SelectObject(m_bitmap);
            pDC->BitBlt(200, 200, bmp.bmWidth, bmp.bmHeight, &dc, 0, 0, SRCCOPY);
            m_bitmap->DeleteObject();
            m_bitmap->Detach();
        }

    Waiting for a reply...
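
    A sketch of the usual Win32 route, offered under the assumption that 'path' holds the file name picked in the browse handler and that the file is a .bmp: ::LoadImage with LR_LOADFROMFILE loads a bitmap from disk, and the resulting HBITMAP can be attached to a CBitmap and blitted as before.

        // Sketch: load a bitmap from a file instead of a resource.
        HBITMAP hBmp = (HBITMAP)::LoadImage(NULL, path, IMAGE_BITMAP,
                                            0, 0, LR_LOADFROMFILE | LR_CREATEDIBSECTION);
        if (hBmp != NULL)
        {
            CBitmap bitmap;
            bitmap.Attach(hBmp); // CBitmap now owns the handle
            BITMAP bmp;
            bitmap.GetBitmap(&bmp);

            CDC dc;
            CDC* pDC = GetDC();
            dc.CreateCompatibleDC(pDC);
            CBitmap* pOld = dc.SelectObject(&bitmap);
            pDC->BitBlt(200, 200, bmp.bmWidth, bmp.bmHeight, &dc, 0, 0, SRCCOPY);
            dc.SelectObject(pOld); // deselect before the CBitmap is destroyed
            ReleaseDC(pDC);
        }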

  • Saving a single entity instead of the entire context - revisited

    - by nite
    I'm looking for a way to have fine-grained control over what is saved using Entity Framework, rather than the whole ObjectContext.SaveChanges(). My scenario is pretty straightforward, and I'm quite amazed it is not catered for in EF -- it's pretty basic in NHibernate and every other data-access paradigm I've seen. I'm generating a bunch of data (in a WPF UI) and allowing the user to fine-tune what is proposed and choose what is actually committed to the database. For the proposed entities I'm:

    1. getting a bunch of reference entities (e.g. languages) via my ObjectContext,
    2. creating the proposed entities and assigning these reference entities to them (as navigation properties), so by virtue of their relationship to the reference entities they're implicitly added to the ObjectContext,
    3. trying to create and save individual entities based on the proposed entities.

    I figure this should be really simple and trivial, but everything I've tried has hit a brick wall. Either I set up another ObjectContext and add just the entity I need (it then tries to add the whole graph and fails, as it's on another ObjectContext), or I set MergeOption = NoTracking on my reference entities to try to get Attach/AddObject not to navigate through them to create a graph -- to no avail. I've removed the navigation properties from the reference entities. I've tried AcceptAllChanges; that works, but it's pretty useless in practice, as I do still want to track and save other entities. In a simple test, I can create two of my proposed entities, AddObject the one I want to save, Detach the one I don't, and then call SaveChanges. This works, but again is not great in practice. Following are a few links to some of the nifty ideas which in the end don't help, but illustrate the complexity of EF for something so simple. I'm really looking for a SaveSingle/SaveAtomic method, and think it's a pretty reasonable and basic ask for any DAL, let alone a cutting-edge ORM.

    http://stackoverflow.com/questions/1301460/saving-a-single-entity-instead-of-the-entire-context
    www.codeproject.com/KB/architecture/attachobjectgraph.aspx?fid=1534536&df=90&mpp=25&noise=3&sort=Position&view=Quick&select=3071122&fr=1
    bernhardelbl.spaces.live.com/blog/cns!DB54AE2C5D84DB78!238.entry

  • Oddities in Linq-to-SQL generated code related to property change/changing events

    - by Lasse V. Karlsen
    I'm working on creating my own LINQ to SQL generated classes in order to learn the concepts behind it all. I have some questions; if anyone knows the answer to one or more of them I'd be much obliged. The code below, and thus the questions, come from looking at the code generated by creating a .DBML file in the Visual Studio 2010 designer and inspecting the .Designer.cs file afterwards.

    1. Why is INotifyPropertyChanging not passing the property name?

    The event-raising method is defined like this:

        protected virtual void SendPropertyChanging()

    Why isn't the name of the property that is changing passed to the event here? It is defined to be part of the EventArgs descendant that is passed to the event handler, but the method only passes an empty such value to it.

    2. Why are the EntitySet<X> attach/detach methods not raising property changed?

    For an EntitySet<X> reference, the following two methods are generated:

        private void attach_EmailAddress1s(EmailAddress1 entity)
        {
            this.SendPropertyChanging();
            entity.Person1 = this;
        }

        private void detach_EmailAddress1s(EmailAddress1 entity)
        {
            this.SendPropertyChanging();
            entity.Person1 = null;
        }

    Why isn't SendPropertyChanged also called here? I'm sure I'll have more questions later, but for now these will suffice :)

  • Creating multiple instances of a generic database

    - by sagekilla
    Hi all, currently I'm trying to set up a system where a generic database is distributed to students. They develop an application using this database (say, a shopping cart application), submit their project to our server, and then it is graded automatically. These databases run on Microsoft SQL Server 2005. We're using user instances to instantiate each database, and multiple requests can be serviced at once. But the problem is that when more than one student submitted a project to be graded, the first database to be instantiated would be the only one, and it would overwrite all other copies that were currently open. So if stu1 modified his database while stu2's and stu3's projects were being graded concurrently, then stu1, stu2, and stu3 would all have identical DBs at the end of the grading. Is there any way I can have multiple independent copies of a generic database, each of which I can load concurrently and modify without changes to any one affecting the others? I did a little reading and thought it might be possible to do something along the lines of:

    1. Student submits project
    2. Attach the database with a unique db name (specified by the student)
    3. Do all necessary operations
    4. Detach the database

    I'm unsure if this would fix our problem or even be possible, so any help would be much appreciated!

  • std::thread and class constructor and destructor

    - by toeplitz
    When testing threads in C++11 I created the following example:

        #include <iostream>
        #include <thread>

        class Foo
        {
        public:
            Foo(void) { std::cout << "Constructor called: " << this << std::endl; }
            ~Foo(void) { std::cout << "Destructor called: " << this << std::endl; }
            void operator()() const { std::cout << "Operator called: " << this << std::endl; }
        };

        void test_normal(void)
        {
            std::cout << "====> Standard example:" << std::endl;
            Foo f;
        }

        void test_thread(void)
        {
            std::cout << "====> Thread example:" << std::endl;
            Foo f;
            std::thread t(f);
            t.detach();
        }

        int main(int argc, char **argv)
        {
            test_normal();
            test_thread();
            for(;;);
        }

    This prints the output captured in a screenshot (not reproduced here). Why is the destructor called 6 times for the thread? And why does the thread report different memory locations?
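
    For what it's worth, a hedged illustration of what is probably going on: std::thread copies its callable into internal storage (and implementations may copy or move it between several locations), so each copy of Foo is destroyed at its own address. Passing std::ref hands the thread a reference to the original functor instead -- a sketch, assuming the functor outlives the thread, which is why it uses join() rather than detach():

        #include <functional>
        #include <iostream>
        #include <thread>

        struct Foo {
            void operator()() const { std::cout << "Operator called: " << this << std::endl; }
        };

        int main()
        {
            Foo f;
            // std::ref avoids the copy: the thread invokes the original f,
            // so no extra Foo constructions/destructions are reported.
            std::thread t(std::ref(f));
            t.join(); // f must stay alive while the thread runs
            return 0;
        }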

  • Prevent cached objects to end up in the database with Entity Framework

    - by Dirk Boer
    We have an ASP.NET project with Entity Framework and SQL Azure. A big part of our data only needs to be updated a few times a day; other data is very volatile. The data that barely changes we cache in memory at startup, detach from the context, and then use mainly for reading, drastically lowering the number of database requests we have to make. The volatile data is requested through a fresh DbContext on every HTTP request. When we do an update to the cached data, we send a message to all instances to fetch a fresh version of all the data from the SQL server. So far, so good. Until we introduced a bug that linked one of these 'cached' objects to the 'volatile' data and did a SaveChanges. Well, that was quite a mess. The whole data tree was added again and again by every update, corrupting the whole database with a whole lot of duplicated data. As a complete hack, I added a completely arbitrary column with a unique constraint and some gibberish data on one of the root tables, hopefully failing the SaveChanges() the next time we introduce such a bug, because it will violate the unique constraint. But it is of course hacky, and I'm still pretty scared ;P Are there any better ways to prevent whole trees of cached objects from ending up in the database? More information: the project is ASP.NET MVC, and I cache this data because it is mainly read-only, and doing so saves tons of extra database calls per HTTP request.

  • Simple Backup Strategy for Amazon EC2 instances / volumes?

    - by minerj
    You have entered Introductory Backups for Amazon EC2 EBS-backed Windows Images 010... I have been browsing my brains out trying to find a simple backup strategy for our single Windows 2008 server running SharePoint Services. This is an EBS-backed image of one server with one data volume. I don't need anything exotic; I only need a "daily" backup (losing a day's worth of data is not catastrophic). We have created and saved an EBS-backed AMI image (Windows 2008) we are comfortable using. We started off making backups by simply creating a new EBS AMI image. This is really simple, but the running server is taken offline during the first 10-15 minutes of creating the image - not ideal. The standard way of creating backups would seem to be creating snapshots of volumes attached to a running instance. Again, it's pretty simple, and the server remains usable during snapshot generation. The apparent Catch-22 is that you can't simply launch a new instance directly from a snapshot. I know how to bundle a running instance to S3 storage and then register the AMI from the S3 bucket. This does allow me to capture a backup of a running instance and, if the running instance is lost, register the AMI from the S3 bucket and launch the new AMI to recover the instance, but this seems really convoluted, and it seems ridiculous to have to juggle back and forth between the AWS Console and the S3 Organizer plug-in for Firefox to get this accomplished. (Please don't mention the command-line approach; this is an 010-level course.) From playing around with EBS-backed images, the following approach appears to work for me (all done within the AWS Console):

    1. For your backups, simply snapshot the system volume (/dev/sda1) as needed.
    2. If you lose your running instance, do the following:
       a. Create a new volume from your last snapshot backup
       b. Launch another instance of your starting AMI (must be EBS-backed)
       c. Stop this instance
       d. Detach the existing system volume from the new stopped instance and discard it
       e. Attach the newly created volume as the system volume (/dev/sda1) to the stopped instance
       f. Re-start the new instance

    I have tested this out a couple of times and it seems to work for me. Question: is there anything wrong with this approach?

  • Cygwin's RSYNC for large data transfer

    - by Tim Brigham
    I'm using rsync from Cygwin to do a large-scale data transfer from an aging HP MSA 1000 to a new DAS attached to a different server. I have a daemon running on the remote server in read-only mode and a local copy writing the files to disk. One of my servers is an image repository with over a million files spread across about 300 directories. Each file averages only a couple hundred kilobytes. More so than any other box, this one is proving problematic. The rsync process will work for a while - sometimes 20 minutes, sometimes an hour - and then it simply quits and sits idle at a given file name. I have verified that the file isn't corrupt on the remote server and that the file is successfully created on the local drive. I ran the rsync client in -vv mode, which returns nothing. I checked the logs created by the daemon. I looked at the network utilization on the interface, which is sitting idle. I looked at the AV settings to see if anything could pose a problem there. I even updated to the latest release of Cygwin. What do I need to do in order to keep this connection up?

    EDIT: The client system is using the command:

        rsync.exe server::Drives/f/Repo/ /cygdrive/T/Repo --archive -P -vv

    The server is using the command:

        rsync.exe --daemon --no-detach --config "rsyncd.conf"

    The contents of rsyncd.conf:

        use chroot = false
        strict modes = false
        hosts allow = 192.168.100.9
        log file = c:/rsyncd.log
        uid = 0
        gid = 0

        [Drives]
        path = /cygdrive
        read only = yes

    EDIT: The file server is 2003, the disk type on the array is GPT, and the size of the array is about 4 TB.

    EDIT: Stranger: it looks like the process reliably errors out at about 175,000 files. rsync runs fine when I pick the directories it has problems with one at a time.

    EDIT:

        rsync  version 3.0.9  protocol version 30
        Copyright (C) 1996-2011 by Andrew Tridgell, Wayne Davison, and others.
        Web site: http://rsync.samba.org/
        Capabilities:
            64-bit files, 64-bit inums, 32-bit timestamps, 64-bit long ints,
            no socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace,
            append, ACLs, xattrs, iconv, symtimes

    A similar failure occurred when going from the same set of files with Cygwin to a Linux install; it just didn't happen until several hours later than usual.

  • SQL Server database filled the hard drive and freeing up space isn't possible

    - by Jon
    I have a database in SQL Server 2008 on a 1 TB hard drive, and it filled the drive; there are only 4 KB free. The MDF file is 323 GB and the LDF is 653 GB. The hard disk this DB is on has no other files on it besides the MDF and LDF, so it's impossible to free up any space on the drive. The main hard disk is smaller, but there is enough room to transfer the MDF to that drive, in case that helps. This server is overseas at a customer site, and it's not possible at the moment to add more disk space to the server. It's also not possible to delete any records, because the DB is in a failed mode (due to no disk space) and doesn't respond to most commands. The DB is currently in full recovery mode, which is why the LDF file is so large. This DB really doesn't need to be in full recovery, so going forward we plan on switching it to simple mode, which will save us a lot of space. I also don't care about losing the LDF file, but I need all of the data. I've spent a lot of time looking for a way out of this problem, but everything I've found first involves either freeing up disk space or adding more disk space, neither of which is an option at this time. I'm stuck and any help would be greatly appreciated. I get the following log when trying to switch the DB to online mode:

        Msg 945, Level 14, State 2, Line 3
        Database 'DBNAME' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details.
        Msg 5069, Level 16, State 1, Line 3
        ALTER DATABASE statement failed.
        Msg 1101, Level 17, State 12, Line 3
        Could not allocate a new page for database 'DBNAME' because of insufficient disk space in filegroup 'DEFAULT'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

    I've found the following solutions, but none work due to having no disk space on that drive, and since the DB is in a failed state I can't run most commands:

    - DBCC SHRINKFILE - can't be run because doing a 'use DBNAME' fails
    - Detaching the DB and then changing the location of the MDF/LDF files - this fails because the DB is in an offline mode, so you can't run detach

    I'm at a loss about what else to try. Thanks.
