Search Results

Search found 16587 results on 664 pages for 'virtual hardware'.

Page 594/664

  • Serializing QGraphicsScene contents

    - by Rob
    I am using the Qt QGraphicsScene class, adding pre-defined items such as QGraphicsRectItem, QGraphicsLineItem, etc., and I want to serialize the scene contents to disk. However, the base QGraphicsItem class (which the items I use derive from) doesn't support serialization, so I need to roll my own code. The problem is that all access to these objects is via a base QGraphicsItem pointer, so the serialization code I have is horrible:

        QGraphicsScene* scene = new QGraphicsScene;
        scene->addRect(QRectF(0, 0, 100, 100));
        scene->addLine(QLineF(0, 0, 100, 100));
        ...
        QList<QGraphicsItem*> list = scene->items();
        foreach (QGraphicsItem* item, list) {
            if (item->type() == QGraphicsRectItem::Type) {
                QGraphicsRectItem* rect = qgraphicsitem_cast<QGraphicsRectItem*>(item);
                // Access QGraphicsRectItem members here
            } else if (item->type() == QGraphicsLineItem::Type) {
                QGraphicsLineItem* line = qgraphicsitem_cast<QGraphicsLineItem*>(item);
                // Access QGraphicsLineItem members here
            }
            ...
        }

    This is not good code, IMHO. So instead I could create an abstract base class like this:

        class Item {
        public:
            virtual void serialize(QDataStream& strm, int version) = 0;
        };

        class Rect : public QGraphicsRectItem, public Item {
        public:
            void serialize(QDataStream& strm, int version) {
                // Serialize this object
            }
            ...
        };

    I can then add Rect objects using QGraphicsScene::addItem(new Rect(...)). But this doesn't really help me, as the following will crash:

        QList<QGraphicsItem*> list = scene->items();
        foreach (QGraphicsItem* item, list) {
            Item* myitem = reinterpret_cast<Item*>(item);
            myitem->serialize(...); // FAIL
        }

    Any way I can make this work?
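
    The crash in the last snippet comes from the reinterpret_cast: with multiple inheritance the Item subobject generally sits at a nonzero offset inside Rect, and reinterpret_cast performs no pointer adjustment. dynamic_cast does perform the cross-cast adjustment, so a minimal sketch of that fix (strm and version stand for whatever stream and version are in scope):

        // dynamic_cast cross-casts from the QGraphicsItem base to the
        // unrelated Item base, adjusting the pointer as needed; it returns
        // null for any item that does not also derive from Item.
        foreach (QGraphicsItem* item, scene->items()) {
            if (Item* myitem = dynamic_cast<Item*>(item)) {
                myitem->serialize(strm, version);
            }
        }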

    Read the article

  • Thoughts on GoGrid vs EC2

    - by Jason
    I am currently hosting my SaaS application at GoGrid (Microsoft stack). Here's what I have:

    - Database server: physical box, 12 GB RAM, 2 x quad-core CPUs (2.13 GHz Xeon E5506)
    - 2 web/app servers: cloud servers, 2 GB RAM, 2 VCPUs
    - 300 GB monthly bandwidth

    I am paying around $900/month for this. My web/app servers are bursting at the seams and need to be upgraded to 4 GB of RAM. I also need a firewall, and GoGrid just added this service for an additional $200. After the upgrade, I will be paying around $1,400. I started looking at Amazon EC2, specifically this config:

    - Database server: "High-Memory Double Extra Large Instance" - 34 GB RAM, 13 EC2 compute units
    - 2 web/app servers: "Large Instance" - 7.5 GB RAM, 4 EC2 compute units

    If I go with one-year reserved instances, my upfront cost would be $4,500 and my monthly cost would be $700, which comes to $1,075/month when amortized. Amazon also includes a firewall for free. Here are my questions:

    - Do any of you have experience running a database (especially SQL Server) on an EC2 instance? How did it perform compared to a dedicated machine? One of my major concerns is disk I/O.
    - Amazon's description of a compute unit is fairly vague. Any ideas on how the CPU performance of the database servers would compare?

    I am hoping that the Amazon solution will provide significantly better performance than my current (or even an improved) GoGrid setup. Having a virtual database server would also be nice in terms of availability; right now I would be in serious trouble if I had any hardware issues. Thanks for any insight...

    Read the article

  • Qt and variadic functions

    - by Noah Roberts
    OK, before lecturing me on the use of C-style variadic functions in C++... everything else has turned out to require nothing short of rewriting the Qt MOC. What I'd like to know is whether or not you can have a "slot" in a Qt object that takes an arbitrary number/type of arguments. The thing is that I really want to be able to generate Qt objects that have slots of an arbitrary signature. Since the MOC is incompatible with standard preprocessing and with templates, it's not possible to do so with either direct approach. I just came up with another idea:

        struct funky_base : QObject {
            Q_OBJECT
            funky_base(QObject* o = 0);
        public slots:
            virtual void the_slot(...) = 0;
        };

    If this is possible, then - because you can make a template that is a subclass of a QObject-derived object as long as you don't declare new Qt stuff in it - I should be able to implement a derived templated type that takes the ... stuff and turns it into the appropriate, expected types. If it is, how would I connect to it? Would this work?

        connect(x, SIGNAL(someSignal(int)), y, SLOT(the_slot(...)));

    If nobody's tried anything this insane and doesn't know offhand, yes, I'll eventually try it myself... but I am hoping someone already has existing knowledge I can tap before possibly wasting my time on it.
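
    One wrinkle worth noting before trying this: va_start needs the name of the last fixed parameter, so a slot declared with only "..." has no portable way to read its arguments at all, and the string-based SIGNAL/SLOT match is also unlikely to accept "..." as a parameter list. A minimal sketch of the derived-template idea, assuming funky_base's slot were changed to carry at least one named argument (every name beyond the question's is invented):

        #include <cstdarg>

        // Hypothetical derived template; 'n' exists mainly so va_start has an
        // anchor. A must be a type that survives default argument promotion
        // through C varargs (no references, no non-trivial classes).
        template <typename A>
        struct funky : funky_base {
            void the_slot(int n, ...) {
                va_list args;
                va_start(args, n);
                A value = va_arg(args, A);
                va_end(args);
                use(value);              // hypothetical downstream handler
            }
            virtual void use(A value) = 0;
        };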

    Read the article

  • Simulating 3D 'cards' with just orthographic rendering

    - by meds
    I am rendering textured quads in an orthographic projection and would like to simulate 'depth' by modifying the UVs and the vertex positions of the quad's four points (top left, top right, bottom left, bottom right). I've found that if I make the top-left and bottom-right corners' y positions the same, I don't get a linear 'skew' but rather a warped one, where the texture covering the top triangle (of the two that make up the quad) seems to get squashed while the bottom triangle's texture looks normal. I can change the UVs and any of the four points on the quad (but only in 2D space; it's an orthographic projection anyway, so 3D space won't matter much). So basically I'm trying to simulate perspective on a two-dimensional quad in an orthographic projection. Any ideas? Is it even mathematically possible/feasible? Ideally what I'd like is a situation where I can set an x/y rotation as well as a virtual z 'position' (which simulates z depth) through a function and see it internally calculate the positions/UVs to create the 3D effect. It seems like this should all be mathematical, where a set of 2D transforms can be applied to each corner of the quad to simulate depth; I just don't know how to make it happen. I'd guess it requires trigonometry or something; I'm trying to crunch the math but not making much progress. Here's what I mean: top left is just the card, center is the card with a y rotation of X degrees, and rightmost is a card with x and y rotations of different degrees.
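
    The math being described is an ordinary perspective projection applied per corner: rotate each corner in 3D, then scale its x/y by a perspective divide before handing it to the orthographic quad. The squashed top triangle is the classic affine-texture-mapping artifact - each triangle interpolates UVs linearly - so without perspective-correct UVs the result is an approximation (subdividing the quad reduces it). A minimal sketch, where d is an invented "camera distance" controlling foreshortening strength:

        #include <math.h>

        struct Vec3 { float x, y, z; };

        // Rotate a corner around Y (by ry), then around X (by rx).
        Vec3 rotate(Vec3 p, float rx, float ry) {
            float cy = cosf(ry), sy = sinf(ry);
            Vec3 q = { cy * p.x + sy * p.z, p.y, -sy * p.x + cy * p.z };
            float cx = cosf(rx), sx = sinf(rx);
            Vec3 r = { q.x, cx * q.y - sx * q.z, sx * q.y + cx * q.z };
            return r;
        }

        // Project to 2D: scale x/y by d / (d + z). Corners rotated toward the
        // viewer (negative z) get larger, producing the trapezoid shape.
        void project(Vec3 p, float d, float* outX, float* outY) {
            float s = d / (d + p.z);
            *outX = p.x * s;
            *outY = p.y * s;
        }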

    Read the article

  • Creating Emulated iSCSI Target in a Lab/Testing Environment using Windows Server 2008 R2

    - by Brian McCleary
    We have a single server running Windows Server 2008 with Hyper-V installed, running 5 virtual machines. I have purchased a second Dell R805 server so that we can create a failover cluster with our current R805, which is in production. Right now, our R805 connects via iSCSI to an MD3000i iSCSI SAN. Before we try to roll out the second server and clustering to our production environment, I want to be able to test and "play with" the clustering features in our lab. The problem is that I don't want to spend a couple thousand dollars on another iSCSI SAN server just for testing. I already have two servers in my lab that are installed with Windows Server 2008 R2 64-bit (one is the R805 and the other is a spare desktop that was lying around) and have the Hyper-V role enabled, so they should be ready to test with, but I don't have an iSCSI target to use as the Cluster Shared Volume. Is there any way to install, either on a Hyper-V image or on a spare external computer, some sort of emulated iSCSI target? In our lab we obviously don't need a real SAN, just something we can use to test setting up the clustering properly outside of our production environment. Any advice is appreciated. FYI - I have read Jose Barreto's blog on WUDSS at http://blogs.technet.com/josebda/archive/2008/01/07/installing-the-evaluation-version-of-wudss-2003-refresh-and-the-microsoft-iscsi-software-target-version-3-1-on-a-vm.aspx, but it seems awfully complex. I'm hoping for an easier solution.

    Read the article

  • Is it possible to unit test methods that rely on NHibernate Detached Criteria?

    - by Aim Kai
    I have tried to use Moq to unit test a method on a repository that uses the DetachedCriteria class, but I come up against a problem whereby I cannot actually mock the internal ICriteria object that is built inside. Is there any way to mock detached criteria?

    Test method:

        [Test]
        [Category("UnitTest")]
        public void FindByNameSuccessTest()
        {
            // Mock NHibernate here
            var sessionMock = new Mock<ISession>();
            var sessionManager = new Mock<ISessionManager>();
            var queryMock = new Mock<IQuery>();
            var criteria = new Mock<ICriteria>();
            var sessionIMock = new Mock<NHibernate.Engine.ISessionImplementor>();
            var expectedRestriction = new Restriction { Id = 1, Name = "Test" };

            // Set up expected returns
            sessionManager.Setup(m => m.OpenSession()).Returns(sessionMock.Object);
            sessionMock.Setup(x => x.GetSessionImplementation()).Returns(sessionIMock.Object);
            queryMock.Setup(x => x.UniqueResult<Restriction>()).Returns(expectedRestriction);
            criteria.Setup(x => x.UniqueResult()).Returns(expectedRestriction);

            // Build repository
            var rep = new TestRepository(sessionManager.Object);

            // Call repository here to get the restriction
            var returnR = rep.FindByName("Test");
            Assert.That(returnR.Id == expectedRestriction.Id);
        }

    Repository class:

        public class TestRepository
        {
            protected readonly ISessionManager SessionManager;

            public virtual ISession Session
            {
                get { return SessionManager.OpenSession(); }
            }

            public TestRepository(ISessionManager sessionManager)
            {
                SessionManager = sessionManager;
            }

            public Restriction FindByName(string name)
            {
                var criteria = DetachedCriteria.For<Restriction>()
                    .Add<Restriction>(x => x.Name == name);
                return criteria.GetExecutableCriteria(Session).UniqueResult<Restriction>();
            }
        }

    Note I am using "NHibernate.LambdaExtensions" and "Castle.Facilities.NHibernateIntegration" here as well. Any help would be gratefully appreciated.
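
    Because DetachedCriteria.For and GetExecutableCriteria are concrete static/instance calls, one common workaround is to push the execution behind a small interface the test can stub instead of mocking NHibernate internals. A minimal sketch under that assumption (ICriteriaExecutor and all related names are invented for illustration; It.IsAny is Moq's standard name for what the question aliases as Param):

        // Hypothetical seam: the repository asks an injected executor to run
        // the criteria, so a unit test can stub the whole NHibernate round trip.
        public interface ICriteriaExecutor
        {
            T UniqueResult<T>(DetachedCriteria criteria);
        }

        public class NHibernateCriteriaExecutor : ICriteriaExecutor
        {
            private readonly ISessionManager _sessionManager;

            public NHibernateCriteriaExecutor(ISessionManager sessionManager)
            {
                _sessionManager = sessionManager;
            }

            public T UniqueResult<T>(DetachedCriteria criteria)
            {
                return criteria.GetExecutableCriteria(_sessionManager.OpenSession())
                               .UniqueResult<T>();
            }
        }

        // In the test, mock only the executor:
        // executorMock.Setup(e => e.UniqueResult<Restriction>(It.IsAny<DetachedCriteria>()))
        //             .Returns(expectedRestriction);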

    Read the article

  • How to choose a lightweight database system

    - by adopilot
    I am starting a POS (point of sale) project. The target system is going to be written in C# .NET 2 WinForms, and as the main database server we are going to use MS SQL Server. As we have a lot of POS devices in a chain for one store, I would love to have a local backend database system on each POS device. The scenario is the following: when the main server goes down, the POS application should continue working "offline" with the local database until the connection to the main server comes up again. Now I am in a dilemma over which local database is going to be most suitable for me. Here are some notes to help point me in the right direction:

    - It must be light: my POS devices are usually old and suffering performance-wise.
    - It must be free: I have a lot of devices and I do not want additional cost beside the main SQL Server.
    - One day I'd love to try porting all of this to Mono and Linux.

    Here is what I've researched so far:

    - Simple XML: light, but I am afraid of performance; my main items table averages 10K records.
    - SQL Server Express: I am afraid my POS devices' hardware is too poor for it, and it is also hard to install and configure on each device.
    - Advantage Database Server (less known): has a free distribution of its offline ADT system.
    - DBF with an extended library: respect for good old DBFs, but that era is behind me, with Clipper and DBFs.
    - MS Access.
    - SQLite: my favorite for now, but I am afraid of how it will pair with MS SQL - do they have the same data types?

    I know that on SO this invites a lot of subjective answers, but can someone at least recommend some other lite database systems, or the things I should pay most attention to before I choose a database?

    Read the article

  • How do I use IIS6 style metabase paths in IIS7 AppCmd tool?

    - by Mike Atlas
    I'm currently in the process of upgrading old IIS6 automation scripts that use the IISVdir tool to create/modify/update apps and virtual directories, and replacing them with AppCmd for IIS7. The IIS6 IISVDir commands reference paths that come from the metabase, e.g. "/W3SVC/1/ROOT/MyApp", where 1 is the ID of the "Default Web Site" site. The command doesn't actually require the display name of the site to make changes to it. This works well, since on a different-language OS the "Default Web Site" site could be named, for example, "??? Web ???" or anything else for that matter. But this flexibility is lost if AppCmd can only reference "Default Web Site" via its name and not via a language-neutral identifier. So, how can I script AppCmd to refer to sites, vdirs and apps using language-neutral identifiers to reference the "Default Web Site"? Perhaps I need to start creating my own site instead, from the start, and name it something else specific, and stop using "Default Web Site" as the root? (Disclosure: I only have an IIS7-English machine to work on currently, but I have both IIS6-English and IIS6-Japanese machines for testing my old scripts - so perhaps it really is just "Default Web Site" still on Win2k8-Japanese?)
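
    For the scripting side, AppCmd can select objects by attribute rather than by display name, which gives a language-neutral handle much like the old metabase ID. A sketch under the assumption that the numeric id attribute is accepted as a list filter (worth verifying on a localized machine):

        REM Look up the site whose ID is 1 and print only its name:
        appcmd list site /id:1 /text:name

        REM Feed that name into later commands, e.g. adding an application:
        appcmd add app /site.name:"Default Web Site" /path:/MyApp /physicalPath:C:\inetpub\myapp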

    Read the article

  • PHP: Best solution for links breaking in a mod_rewrite app

    - by psil
    I'm using mod_rewrite to redirect all requests targeting non-existent files/directories to index.php?url=*. This is surely the most common thing you do with mod_rewrite, yet I have a problem: naturally, if the page URL is "mydomain.com/blog/view/1", the browser will look for images, stylesheets and relative links in the "virtual" directory "mydomain.com/blog/view/".

    Problem 1: Is using the base tag the best solution? I see that none of the PHP frameworks out there use the base tag, though. I'm currently having a regex replace all the relative links to point to the right path before output. Is that "okay"?

    Problem 2: It is possible that the server doesn't support mod_rewrite. However, all public files like images, stylesheets and the request collector index.php are located in the directory /myapp/public. Normally mod_rewrite points all requests to /public, so it seems as if public were actually the root directory to all users. If there is no mod_rewrite, I then have to point users to /public from the root directory with a header() call. That means, however, that all links are broken again, because suddenly all images etc. have to be called via /public/myimage.jpg.

    Additional info: when there is no mod_rewrite, the above request would look like this: mydomain.com/public/index.php/blog/view/1. What would be the best solutions for both problems?
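
    For reference, the rewrite described in the first sentence is the usual front-controller rule (domain and paths below are illustrative):

        # .htaccess - route anything that isn't a real file or directory to index.php
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]

    And for problem 1, a single base tag in the page head makes every relative asset path resolve from the site root instead of the virtual directory:

        <base href="http://mydomain.com/" />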

    Read the article

  • MySQL not running on Ubuntu OS - Error 2002.

    - by mgj
    Hi, I am a novice with MySQL databases. I am trying to run the MySQL server on Ubuntu 10.04. Through Synaptic Package Manager I have installed mysql-client-5.1. I wonder how the database password was set for the mysql-client software that I installed this way; it would be nice if you could enlighten me on that. When I tried running the database, I encountered the error given below:

        mohnish@mohnish-laptop:/var/lib$ mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    I referred to a similar question posted by another user, but I didn't find a solution through the proposed answers. For instance, when I tried the solutions posted for the similar question I got the following:

        mohnish@mohnish-laptop:/var/lib$ service start mysqld
        start: unrecognized service
        mohnish@mohnish-laptop:/var/lib$ ps -u mysql
        ERROR: User name does not exist.
        [ps then prints its usage help]
        mohnish@mohnish-laptop:/var/lib$ which mysql
        /usr/bin/mysql
        mohnish@mohnish-laptop:/var/lib$ mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    I even tried referring to http://forums.mysql.com/read.php?11,27769,84713#msg-84713 but couldn't find anything useful. Please let me know how I could tackle this error. Thank you very much.
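
    A note on the transcript above: installing only mysql-client-5.1 provides the client binary but no local server, which by itself explains the missing socket; and on Ubuntu the service is named mysql, not mysqld, hence the "unrecognized service" line. A sketch of the usual checks (package and service names as on a standard Ubuntu 10.04 install):

        # Install the server if only the client is present; the installer
        # prompts for the MySQL root password during setup.
        sudo apt-get install mysql-server

        # Start and check the service - Ubuntu calls it "mysql"
        sudo service mysql start
        sudo service mysql status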

    Read the article

  • Risking the exception anti-pattern... with some modifications

    - by Sridhar Iyer
    Let's say that I have a library which runs 24x7 on certain machines. Even if the code is rock solid, a hardware fault can sooner or later trigger an exception. I would like to have some sort of failsafe in position for events like this. One approach would be to write wrapper functions that encapsulate each API:

        returnCode = DEFAULT;
        try {
            returnCode = libraryAPI1();
        } catch (...) {
            returnCode = BAD;
        }
        return returnCode;

    The caller of the library then restarts the whole thread and reinitializes the module if the return code is bad. Things CAN still go horribly wrong. E.g. if the try block (or libraryAPI1()) had:

        func1();
        char* x = (char*)malloc(1000);
        func2();

    then if func2() throws an exception, x will never be freed. In a similar vein, file corruption is a possible outcome. Could you please tell me what other things can possibly go wrong in this scenario?
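
    Worth noting for the malloc example: the standard C++ answer to that class of leak is RAII, so cleanup runs during stack unwinding no matter which call throws, and catch(...) can stay a pure "report and restart" path. A minimal sketch with func1/func2 as in the question (std::unique_ptr for brevity; std::vector<char> or boost::scoped_array works the same way on pre-C++11 compilers):

        #include <memory>

        void wrapped()
        {
            func1();
            std::unique_ptr<char[]> x(new char[1000]);
            func2();
        }   // x is freed here, or during unwinding if func2() throws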

    Read the article

  • Really strange problem with jailbroken iPhone 4: lags, but running Cydia for a second fixes it

    - by Charkel
    I have jailbroken my iPhone 4 (iOS 4.1) with both limera1n and greenpois0n, and the outcome is the same: my phone is really slow opening apps (including Phone.app and SMS.app); they take about 10-15 seconds to load. BUT if I start Cydia for only one second, everything goes back to normal (please note that I have to have 3G or Wi-Fi access to do this) - until I let my phone sit for about 15 minutes; then it's the same again. Please note that if I don't exit the apps, they seem to continue working, but with a bit of lag. Cydia also seems to be the only app not affected by the delay: "Loading Data" appears as quickly as normal. These issues did not start to appear until about two weeks after I jailbroke my phone. I have tried running a speed test while the phone is in its slow state: the app loads for about 15 seconds, and then the result shows everything is good - all green. Since Cydia seems to fix it temporarily, I figure it's jailbreak-related and not hardware-related.

    Read the article

  • What's the correct way to do a "catch all" error check on an fstream output operation?

    - by Truncheon
    What's the correct way to check for a general error when sending data to an fstream?

    UPDATE: My main concern regards some things I've been hearing about a delay between output and any data being physically written to the hard disk. My assumption was that the command "save_file_obj << save_str" would only send data to some kind of buffer, and that the following check "if (save_file_obj.bad())" would not be any use in determining whether there was an OS or hardware problem. I just wanted to know what the definitive "catch all" way is to send a string to a file and check to make certain that it was written to the disk, before carrying out any following actions such as closing the program. I have the following code...

        int Saver::output()
        {
            save_file_handle.open(file_name.c_str());
            if (save_file_handle.is_open()) {
                save_file_handle << save_str.c_str();
                if (save_file_handle.bad()) {
                    x_message("Error - failed to save file");
                    return 0;
                }
                save_file_handle.close();
                if (save_file_handle.bad()) {
                    x_message("Error - failed to save file");
                    return 0;
                }
                return 1;
            } else {
                x_message("Error - couldn't open save file");
                return 0;
            }
        }
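
    For a compact catch-all, a common pattern is to let the stream throw on any failed operation and to flush explicitly before close. Note the caveat implied by the update: even a clean flush only hands the data to the OS; a hard guarantee of bytes on the platter needs an OS-level call such as fsync/FlushFileBuffers, which is outside the iostream API. A minimal sketch of the stream-level part:

        #include <fstream>
        #include <ios>
        #include <string>

        // Returns true only if open, write, flush and close all succeeded.
        // exceptions() turns any badbit/failbit into std::ios_base::failure,
        // so one catch covers every stream-level error.
        bool save_to_file(const std::string& file_name, const std::string& save_str)
        {
            std::ofstream out;
            out.exceptions(std::ofstream::failbit | std::ofstream::badbit);
            try {
                out.open(file_name.c_str());
                out << save_str;
                out.flush();   // push the buffer to the OS; close() also flushes
                out.close();
                return true;
            } catch (const std::ios_base::failure&) {
                return false;
            }
        }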

    Read the article

  • Why does this test fail?

    - by Tomas Lycken
    I'm trying to test/spec the following action method:

        public virtual ActionResult ChangePassword(ChangePasswordModel model)
        {
            if (ModelState.IsValid)
            {
                if (MembershipService.ChangePassword(User.Identity.Name, model.OldPassword, model.NewPassword))
                {
                    return RedirectToAction(MVC.Account.Actions.ChangePasswordSuccess);
                }
                else
                {
                    ModelState.AddModelError("", "The current password is incorrect or the new password is invalid.");
                }
            }

            // If we got this far, something failed; redisplay the form
            return RedirectToAction(MVC.Account.Actions.ChangePassword);
        }

    with the following MSpec specification:

        public class When_a_change_password_request_is_successful : with_a_change_password_input_model
        {
            Establish context = () =>
            {
                membershipService.Setup(s => s.ChangePassword(Param.IsAny<string>(), Param.IsAny<string>(), Param.IsAny<string>())).Returns(true);
                controller.SetFakeControllerContext("POST");
            };

            Because of = () => controller.ChangePassword(inputModel);

            ThenIt should_be_a_redirect_result = () => result.ShouldBeARedirectToRoute();

            ThenIt should_redirect_to_success_page = () => result.ShouldBeARedirectToRoute().And().ShouldRedirectToAction<AccountController>(c => c.ChangePasswordSuccess());
        }

    where with_a_change_password_input_model is a base class that instantiates the input model, sets up a mock for the IMembershipService, etc. The test fails on the first ThenIt (which is just an alias I'm using to avoid a conflict with Moq...) with the following error description:

        Machine.Specifications.SpecificationException: Should be of type System.RuntimeType but is [null]

    But I am returning something - in fact, a RedirectToRouteResult - in each way the method can terminate! Why does MSpec believe the result to be null?
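
    One detail stands out in the spec above: the Because delegate calls the action but never stores its return value, so whatever result field the base class declares is still null when the assertions run. A sketch of the fix, assuming result is that base-class field:

        // Capture the action's return value so the ThenIt assertions have
        // something other than null to inspect.
        Because of = () => result = controller.ChangePassword(inputModel);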

    Read the article

  • How to use the LINQ where expression?

    - by NullReference
    I'm implementing the service/repository pattern in a new project. I've got a base interface that looks like this. Everything works great until I need to use the GetMany method; I'm just not sure how to pass a LINQ expression into GetMany. For example, how would I simply sort a list of objects of type Name - nameRepository.GetMany( ? )

        public interface IRepository<T> where T : class
        {
            void Add(T entity);
            void Update(T entity);
            void Delete(T entity);
            void Delete(Expression<Func<T, bool>> where);
            T GetById(long Id);
            T GetById(string Id);
            T Get(Expression<Func<T, bool>> where);
            IEnumerable<T> GetAll();
            IEnumerable<T> GetMany(Expression<Func<T, bool>> where);
        }

        public virtual IEnumerable<T> GetMany(Expression<Func<T, bool>> where)
        {
            return dbset.Where(where).ToList();
        }
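
    A note on usage: GetMany's expression is a boolean predicate, so it filters rather than sorts; ordering is applied to the returned sequence (or would need its own key-selector parameter on the repository). A minimal sketch, with the Name type's properties invented for illustration:

        // Filter with the predicate the repository expects...
        IEnumerable<Name> smiths = nameRepository.GetMany(n => n.LastName == "Smith");

        // ...then sort the result. Sorting inside the repository would need a
        // separate Expression<Func<T, TKey>> orderBy parameter.
        var sorted = smiths.OrderBy(n => n.LastName).ThenBy(n => n.FirstName);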

    Read the article

  • Advantages of Using Linux as primary developer desktop

    - by Nick N
    I want to get some input on the advantages of why developers should and need to use Linux as their primary development desktop on a daily basis, as opposed to using Windows. This is particularly helpful when your dev, QA, and production environments are Linux. The analogy that I keep coming back to is: if I build my demo car as a Ford Escort, but my project car is a Ford Mustang, it doesn't make sense at all. I'm currently at an IT department that allows dual boot with Windows and Linux; some run Linux while the vast majority use Windows. Here are several advantages that I've come up with since using Linux as a primary desktop:

    - The same exact operating system as dev, QA, and production
    - The same scripts (.sh) instead of maintaining both .bat and .sh - somewhat mitigated by using Cygwin, but still a bit different
    - The team learns simple commands such as cd, ls, cat, top
    - The team learns advanced commands like pkill, pgrep, chmod, su, sudo, ssh, scp
    - Full access to installs typical for Linux, such as RPM and DEB packages, just like the target environments

    The list could go on and on, but I want to get some feedback on anything that I may have missed, or even any disadvantages (of course there are some). To me it makes sense to migrate an entire team over to using Linux, and to use VirtualBox running Windows XP VMs to test functional items for the 95% of the world that uses Windows. There is a similar but slightly different thread going on here as well.

    Read the article

  • 'Bank Switching' Sprites on old NES applications

    - by Jeffrey Kern
    I'm currently writing in C# what could basically be called my own interpretation of the NES hardware for an old-school-looking game that I'm developing. I've fired up FCE and have been observing how the NES displayed and rendered graphics. In a nutshell, the NES could hold two bitmaps' worth of graphical information, each with dimensions of 128x128. These are called the PPU pattern tables; one was for BG tiles and the other was for sprites. Data had to be in this memory for it to be drawn on screen. Now, if a game had more graphical data than these two banks, it could write portions of this new information to these banks - overwriting what was there - at the end of each frame, and use it from the next frame onward.

    So, in old games, how did the programmers 'bank switch'? I mean, within the level design, how did they know which graphic set to load? I've noticed that Mega Man 2 bank-switches when the screen programmatically scrolls from one portion of the stage to the next. But how did they store this information in the level - which sprites to copy over into the PPU tables, and where to write them? Another example would be hitting pause in MM2: BG tiles get overwritten during pause and then restored when the player unpauses. How did they remember which tiles they replaced and how to restore them?

    If I were lazy, I could just make one huge static bitmap and grab values that way, but I'm forcing myself to limit these values to create a more authentic experience. I've read the amazing guide on how M.C. Kids was made, and I'm trying to be barebones about how I program this game. It still just boggles my mind how these programmers accomplished what they did with what they had.

    EDIT: The only solution I can think of would be to hold separate tables that state which tiles should be in the PPU at what time, but I think that would be a huge memory resource that the NES wouldn't be able to handle.
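
    For what it's worth, the closing idea is close to how cartridges typically did it: levels carried small tables pairing screen/section IDs with the banks or tile blocks to copy in, and the pause trick is usually "back up the region you are about to overwrite into spare RAM, restore it on unpause". A minimal C# sketch of that bookkeeping (all names invented):

        using System;

        // Hypothetical per-section bank table: entering a section tells the
        // renderer which tile data to copy into the BG pattern table.
        struct SectionBanks
        {
            public int SectionId;       // which screen/section of the stage
            public int BgTileBank;      // index of the tile block to load
            public int SpriteTileBank;
        }

        class PpuEmulator
        {
            byte[] bgPatternTable = new byte[4096];
            byte[] savedRegion;         // scratch copy for the "pause" trick

            // Overwrite part of the pattern table, remembering what was there.
            public void LoadWithBackup(byte[] newTiles, int offset)
            {
                savedRegion = new byte[newTiles.Length];
                Array.Copy(bgPatternTable, offset, savedRegion, 0, newTiles.Length);
                Array.Copy(newTiles, 0, bgPatternTable, offset, newTiles.Length);
            }

            // On unpause: put the original tiles back.
            public void RestoreBackup(int offset)
            {
                Array.Copy(savedRegion, 0, bgPatternTable, offset, savedRegion.Length);
            }
        }

    A table like this costs only a few bytes per section, which is why it fit comfortably in NES ROM despite the memory concern raised in the EDIT.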

    Read the article

  • How to tell what account my webservice is running under in Visual Studio 2005

    - by John Galt
    I'm going a little nuts trying to understand the docs on impersonation and delegation, and the question has come up of what account my webservice is running under. I am logged in as myDomainName\johna on my development workstation, called JOHNXP. From Visual Studio 2005 I start my webservice via Debug, and the WSDL page comes up in my browser. From Task Manager, I see the following while sitting at a breakpoint in my .asmx code:

        aspnet_wp.exe   pid=1316   UserName=ASPNET
        devenv.exe      pid=3304   UserName=johna

    The IIS Directory Security tab for the virtual directory that hosts my ws.asmx code has "Enable Anonymous access" UNCHECKED and "Integrated Windows Authentication" CHECKED. So when the MSDN people state "you must configure the user account under which the server process runs", what would they be referring to in the case of my little webservice described above? I am quoting from http://msdn.microsoft.com/en-us/library/aa302400.aspx.

    Ultimately, I want this webservice of mine to impersonate whatever authenticated domain user browses to an invocation of my webservice. My webservice in turn consumes another ASMX webservice on a different server (but the same domain), and I need that remote webservice to use the impersonated domain user's credentials (not those of my webservice on JOHNXP). So it's getting a little snarly for me to understand this, and I see I am unclear about the account my web service uses. I think it is ASPNET in IIS 5.1 on WinXP, but I'm not sure.
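
    For the scenario described (IIS 5.x, so ASP.NET runs in aspnet_wp.exe as the local ASPNET account unless configured otherwise), flowing the caller's identity through the request typically means Windows authentication plus impersonation in web.config; the second hop to the remote .asmx additionally requires Kerberos delegation to be allowed in Active Directory. A minimal sketch of the config side:

        <!-- web.config: run each request under the authenticated caller's
             token instead of the worker process account. -->
        <system.web>
          <authentication mode="Windows" />
          <identity impersonate="true" />
        </system.web>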

    Read the article

  • DotNetNuke and Subversion guidelines

    - by David Stratton
    I've Googled, Binged, and searched here at StackOverflow, looked through the related questions, but I'm not finding what I'm looking for. I've also searched the documentation on DNN. What I'm looking for is any guidance (tutorials, blogs, step-by-step instructions for setting up a repository, etc.) from people who are experienced in using DotNetNuke with SVN. We use SVN for all our source control and have no problem with standard applications, because we pretty much built the repository and directory structure to work with our processes. This means that when we do web sites in Visual Studio, we create file-based web sites rather than setting them up in the local IIS; it just makes things easier for us. However, with DNN it appears that even if you get the source code, it expects to be set up in the local IIS, which means additional headaches for us. For example, we are moving all of our source code off our local C drives and onto a shared drive on a server, to enable backups in addition to our normal source control (this was a management decision). That means we need to change the virtual web app when we make the move. Has anyone come up with a good way to work around this? Can DNN be set up so that the developer web server in Visual Studio can be used, so that we can treat it just like any normal web app? Am I missing something obvious?

    Edit - added: I'm willing to accept answers like "We tried it and never got it to work" and "It can't be done". I'm always open to hearing "It can't be done the way you want; you need to change your procedures to match how it works" if necessary. I guess if you've got experience trying this and just couldn't get it to work, I can learn from that as well, but some detail would be good.

    Read the article

  • Dynamically overriding an abstract method in C#

    - by ng
    I have the following abstract class:

        public abstract class AbstractThing
        {
            public String GetDescription()
            {
                return "This is " + GetName();
            }

            public abstract String GetName();
        }

    Now I would like to implement some new dynamic types from this, like so:

        AssemblyName assemblyName = new AssemblyName();
        assemblyName.Name = "My.TempAssembly";
        AssemblyBuilder assemblyBuilder = Thread.GetDomain().DefineDynamicAssembly(assemblyName, AssemblyBuilderAccess.Run);
        ModuleBuilder moduleBuilder = assemblyBuilder.DefineDynamicModule("DynamicThings");

        TypeBuilder typeBuilder = moduleBuilder.DefineType(someName + "_Thing",
            TypeAttributes.Public | TypeAttributes.Class,
            typeof(AbstractThing));

        MethodBuilder methodBuilder = typeBuilder.DefineMethod("GetName",
            MethodAttributes.Public | MethodAttributes.ReuseSlot | MethodAttributes.Virtual | MethodAttributes.HideBySig,
            null,
            Type.EmptyTypes);

        ILGenerator msil = methodBuilder.GetILGenerator();
        msil.EmitWriteLine(selectionList);
        msil.Emit(OpCodes.Ret);

    However, when I try to instantiate via typeBuilder.CreateType(), I get an exception saying that there is no implementation for GetName. Is there something I am doing wrong here? I cannot see the problem. Also, what would be the restrictions on instantiating such a class by name? For instance, if I tried to instantiate via "My.TempAssembly.x_Thing", would it be available for instantiation without the Type generated?
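
    One thing jumps out in the DefineMethod call above: the return type is passed as null, which defines a void GetName() - a different signature from the abstract String GetName(), so the abstract method really is left without an implementation and CreateType() rejects the type. A sketch of the corrected definition (the string literal is invented for illustration):

        // Match the abstract signature exactly: String GetName(), no parameters.
        MethodBuilder methodBuilder = typeBuilder.DefineMethod("GetName",
            MethodAttributes.Public | MethodAttributes.ReuseSlot | MethodAttributes.Virtual | MethodAttributes.HideBySig,
            typeof(String),      // was null, i.e. void
            Type.EmptyTypes);

        ILGenerator msil = methodBuilder.GetILGenerator();
        msil.Emit(OpCodes.Ldstr, "SomeName");  // leave the return value on the stack
        msil.Emit(OpCodes.Ret);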

    Read the article

  • Create a new app pool and assign it to a site subfolder on a remote host, using C# and IIS7

    - by Soeren
    I have a web site running on IIS7 on a remote server. I would like to do the following:

    1. Create a new subfolder under the root virtual directory.
    2. Create a new app pool.
    3. Add this new app pool to the new subfolder.

    Normally, I would do this manually in IIS by first creating the app pool and then right-clicking the subfolder and choosing "Add Application", but I need to do this programmatically in C#. I've managed to make points 1 and 2 work, but I can't find the way to add the application to the subfolder. This is the code I have used so far for 1 and 2:

        ServerManager mgr = new ServerManager();
        ApplicationPool myAppPool = mgr.ApplicationPools.Add("MyAppPool");
        myAppPool.AutoStart = true;
        myAppPool.Cpu.Action = ProcessorAction.KillW3wp;
        myAppPool.ManagedPipelineMode = ManagedPipelineMode.Integrated;
        myAppPool.ManagedRuntimeVersion = "V4.0";
        myAppPool.ProcessModel.IdentityType = ProcessModelIdentityType.NetworkService;
        mgr.CommitChanges();

        if (!Directory.Exists(@"D:\webroot\TestSite\NytSite"))
        {
            Directory.CreateDirectory(@"D:\webroot\TestSite\NytSite");
        }

    So, I need to add "MyAppPool" to the "NytSite" folder... Is this even the correct way to do this? Any experiences out there? Thanks.
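
    For the missing third step, Microsoft.Web.Administration models it as adding an Application under the site and pointing it at the pool. A minimal sketch, assuming the parent site is named "TestSite" as the paths suggest:

        using (ServerManager mgr = new ServerManager())
        {
            Site site = mgr.Sites["TestSite"];

            // "/NytSite" is the application path below the site root; the
            // second argument is the physical folder created above.
            Application app = site.Applications.Add("/NytSite", @"D:\webroot\TestSite\NytSite");
            app.ApplicationPoolName = "MyAppPool";

            mgr.CommitChanges();
        }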

    Read the article

  • Django notifications: getting the date a user accesses a link

    - by dana
    Hi there. I'm making a notification system so that a user in a virtual community is notified when someone sends him a message or starts following him (the follow relation is like a friend relation, but it is not necessarily reciprocal).

    My view function:

        def notification_view(request, last_checked):
            notification = Notification.objects.get(last_checked=last_checked)
            u = Relation.objects.filter(date_follow__gt=notification.last_checked)
            v = Message.objects.filter(date__gt=notification.last_checked)
            if u:
                notification_type = follow
                if notice_settings == receive_notification or notice_settings == only_follow:
                    following = u
            if v:
                notification_type = message
                if notice_settings == receive_notification or notice_settings == only_messages:
                    message = v
            return render_to_response('notification/notification.html',
                                      {'following': following, 'message': message},
                                      context_instance=RequestContext(request))

    The models.py:

        class NoticeType(models.Model):
            follow = models.ForeignKey(Relations, editable=False)
            message = models.ForeignKey(Messages)
            classroom_invitation = models.ForeignKey(Classroom)

        class Notification(models.Model):
            receiver = models.ForeignKey(User, editable=False)
            date = models.DateTimeField(auto_now=True, editable=False)
            notice_type = models.ForeignKey(NoticeType, editable=False, related_name="notification_type")
            sent = models.BooleanField(default=True)
            last_checked = models.DateTimeField(auto_now=True, editable=False)

        class NotificationSettings(models.Model):
            user = models.ForeignKey(User)
            receive_notifications = models.BooleanField(default=True)
            only_follow = models.BooleanField(default=False)
            only_message = models.BooleanField(default=False)
            only_classroom = models.BooleanField(default=False)
            # receive_on_email = models.BooleanField(default=False)

    My problem is: I want last_checked to be the time when someone accesses the notification link. How can I possibly save that time, and how can I get it? Thanks in advance!
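
    On the actual question - recording the moment the user opens the notifications page - the usual Django answer is to stamp the record inside the view that serves that link; since last_checked has auto_now=True in the models above, a bare save() refreshes it. A minimal sketch (field names follow the models above; the per-user lookup is an assumption):

        def notification_view(request):
            notification = Notification.objects.get(receiver=request.user)

            # Collect everything newer than the previous visit first...
            following = Relation.objects.filter(date_follow__gt=notification.last_checked)
            messages = Message.objects.filter(date__gt=notification.last_checked)

            # ...then record this visit: save() updates last_checked because
            # the field is declared with auto_now=True.
            notification.save()

            return render_to_response('notification/notification.html',
                                      {'following': following, 'message': messages},
                                      context_instance=RequestContext(request))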

    Read the article

  • authentication question (security code generation logic)

    - by Stick it to THE MAN
    I have a security number generator device, small enough to go on a key ring, which has a six-digit LCD display and a button. After I have entered my account name and password on an online form, I press the button on the security device and enter the security code number which is displayed. I get a different number every time I press the button, and the number generator has a serial number on the back which I had to input during the account set-up procedure. I would like to incorporate similar functionality in my website. As far as I understand, these are the main components:

    1. Generate a unique N-digit alphanumeric sequence during registration and assign it to the user (permanently)
    2. Allow the user to generate an N (or M?) digit alphanumeric sequence remotely
    3. Identify the user from the number generated in step 2 (which decryption method is the most robust way to do this?)

    For now, I don't care about the hardware side; I am only interested in knowing how I may choose a suitable algorithm that will allow the user to generate an N (or M?) long alphanumeric sequence - presumably using his unique ID as a seed. I have the following questions:

    - Have I identified all the steps required in such an authentication system? If not, please point out what I have missed and why it is important.
    - What are the most robust encryption/decryption algorithms I can use for steps 1 through 3 (preferably using 64 bits)?
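
    What's described matches the HOTP scheme from RFC 4226: the server and the token share a secret seeded at registration, both compute HMAC-SHA-1 over a moving counter (or the current time slot, as in TOTP), and the server identifies/verifies the user by recomputing the code for the account's secret within a small counter window rather than by "decrypting" the code. A minimal sketch:

        import hashlib
        import hmac
        import struct

        def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
            """RFC 4226 one-time code: HMAC-SHA-1 plus dynamic truncation."""
            mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
            offset = mac[-1] & 0x0F
            code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % 10 ** digits).zfill(digits)

        # Server side: accept the code if it matches any counter in a small
        # look-ahead window (the device's counter may have run ahead).
        def verify(secret: bytes, counter: int, code: str, window: int = 10):
            for c in range(counter, counter + window):
                if hmac.compare_digest(hotp(secret, c), code):
                    return c + 1   # new counter value to store for the account
            return None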

    Read the article

  • How to make a WAR deployment take up less memory

    - by Myy
    I need help with decreasing the memory usage of my web app, so I can fit more apps onto my web server. I'm building a Java web app with JSF 2.0, developing in Eclipse Helios and running on an Apache Tomcat server, and I have a dedicated virtual server with a Tomcat as well, where I deploy the WAR files. The web app is about 35 MB in size (it has a lot of JARs and such), but when I deploy it to my Tomcat web server, I can see it takes about 300 MB of RAM. Is this normal? My dedicated server only has 2 GB of RAM, of which I normally have 1 GB to use, so as soon as I deploy 3 apps I get an OOM error. I've gotten both a PermGen OOM and an "out of swap space" error; to fix this I upped the max PermGen size to about a gig and restarted the server to get back some swap space. Then I tried deploying smaller, older apps (about 15 MB) and they take up way less memory. Since I have 1 GB of RAM, I want to be able to fit more apps onto my web server without getting any OOM errors. Now, I found this Stack Overflow question - can that be applied to my case? And if so, which are the common folders in the Tomcat server? Has anyone done this before, or have a different, more effective and not so complicated approach? Any ideas and/or comments are more than appreciated. Thanks! Myy
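
    One thing worth knowing while investigating this: a webapp's resident footprint is mostly the JVM's heap and PermGen allowance rather than the WAR's size on disk, and both are capped per JVM - that is, per Tomcat instance, shared by all deployed apps. A sketch of typical flags for a 1 GB box, assuming a Sun/Oracle Java 6 JVM of that era (the numbers are illustrative and need tuning):

        # setenv.sh / catalina startup env - cap the shared heap and PermGen
        # rather than letting each deployment grow unchecked.
        CATALINA_OPTS="-Xms128m -Xmx512m -XX:MaxPermSize=192m"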

    Read the article

  • Java try finally variations

    - by Petr Gladkikh
    This question nags me for a while but I did not found complete answer to it yet (e.g. this one is for C# http://stackoverflow.com/questions/463029/initializing-disposable-resources-outside-or-inside-try-finally). Consider two following Java code fragments: Closeable in = new FileInputStream("data.txt"); try { doSomething(in); } finally { in.close(); } and second variation Closeable in = null; try { in = new FileInputStream("data.txt"); doSomething(in); } finally { if (null != in) in.close(); } The part that worries me is that the thread might be somewhat interrupted between the moment resource is acquired (e.g. file is opened) but resulting value is not assigned to respective local variable. Is there any other scenarios the thread might be interrupted in the point above other than: InterruptedException (e.g. via Thread#interrupt()) or OutOfMemoryError exception is thrown JVM exits (e.g. via kill, System.exit()) Hardware fail (or bug in JVM for complete list :) I have read that second approach is somewhat more "idiomatic" but IMO in the scenario above there's no difference and in all other scenarios they are equal. So the question: What are the differences between the two? Which should I prefer if I do concerned about freeing resources (especially in heavily multi-threading applications)? Why? I would appreciate if anyone points me to parts of Java/JVM specs that support the answers.

    Read the article
