Search Results

Search found 66916 results on 2677 pages for 'real time strategy'.


  • Is there a way to get a shared spreadsheet to update without closing and reopening?

    - by Mike
    Using Excel 2010, I have a spreadsheet that is used by 3 different people at any one time. If one person has the spreadsheet open on their PC, the other people can only view it as read-only. I have since shared the workbook and put it on a shared network drive, and now they can all view and edit the spreadsheet at the same time. The problem is that nobody can see the changes the other users have made unless they close the spreadsheet and open it up again. I have checked the settings of the shared workbook, and on the Advanced tab I have ticked the option that updates the information every 5 minutes, but the information does not update until you close the spreadsheet and open it back up. How can I fix this problem?

    Read the article

  • Which OS should I boot into for virtualization?

    - by acidzombie24
    This might be a silly question. I use Windows 7 most of the time, run Linux about 10% of the time, and XP about 5% of the time. I am thinking about getting an Intel® Core™ i7-2600 processor, which has hardware support for virtualization. I don't think I want more than one partition, though I may have a swap partition. Which OS should I make my primary (and only) partition? I suspect Windows 7, since I am always using it, and going through a Linux layer would slow it down. Does it matter much which OS I use if I have hardware support for virtualization? At the moment I am using VMware Player. I suspect the software doesn't affect performance?

    Read the article

  • Nvidia driver on Windows 7 causing black screen

    - by inKit
    I have just installed Windows 7 on a desktop machine and, for the first time ever, have had a really tough time doing so; it's normally a nice smooth install. This time I found that the monitor would simply go black after completing the installation. I tried reinstalling about 3 times and this did not help. After much searching I discovered that it was the Nvidia drivers that were playing up with Windows 7, so I booted into safe mode, disabled the device, then rebooted to complete the installation. Windows 7 now works fine as long as the Nvidia 9600 GT video card is disabled. The moment I enable it, the system requires a reboot and the screen goes black before even getting to the login screen. I have tried downloading the latest driver and installing it manually, and I have also tried uninstalling the device and allowing Windows 7 to install it itself. Nothing seems to work. Any clues?

    Read the article

  • sticky bit on NFS file system

    - by Kris_R
    I have a system where only root can log in to the main server (homes, NFS, NTP, queue...) – all the other users use a front-end host with NFS-mounted home directories (read-write) and all other software directories (read-only). My problem is that, from time to time, when root or a normal user with sudo does some administrative work on the front-end, the home directories of some normal users end up with the setuid/setgid bits set (drwsr-sr-x). When it happens, the user usually can't log in (until the permissions on his home are changed back to drwxr-xr-x). The last time I saw it was after compiling some new software (configure; make as a normal user) and installing from the same directory as root (su and make install, or directly as a normal user with sudo make install). Can somebody explain to me why this happens and what I should do to get rid of the problem? P.S. I'm using CentOS 5.7.
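
    In the meantime, a cleanup pass is easy to script. Below is a minimal sketch that strips the offending bits from affected home directories; the /home path is an assumption, so adjust it to your layout:

      #!/usr/bin/env python
      # Sketch: find home directories with the setuid/setgid bits set
      # (drwsr-sr-x) and strip those bits, leaving the rest of the mode alone.
      # Assumes homes live directly under /home -- adjust as needed.
      import os
      import stat

      HOMES = "/home"

      for name in os.listdir(HOMES):
          path = os.path.join(HOMES, name)
          mode = os.stat(path).st_mode
          if mode & (stat.S_ISUID | stat.S_ISGID):
              print("fixing %s (mode %o)" % (path, stat.S_IMODE(mode)))
              os.chmod(path, stat.S_IMODE(mode) & ~(stat.S_ISUID | stat.S_ISGID))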

    Read the article

  • Timing startup processes?

    - by acidzombie24
    My Windows 7 used to boot up fast and now it's getting rather slow. I suspect one program is eating up all the time, yet I can't tell which one, since Task Manager shows <40% of the CPU being used. What can I use to track how long each process takes when my computer boots/starts up? Note: Except for Launchy, which I used before my computer became slow, all my startup programs and services are signed and known (Broadcom, VMware, Google Inc, Intel, etc.). Note 2: I am mostly concerned with the time it takes after I log in, but I suspect the time before that is slightly slower too (though I don't think by much).

    Read the article

  • php.ini settings change not taking effect for large file uploads

    - by user51347
    My server was just reprovisioned, and my application, which uploads large (100M+) files, now breaks after re-installation. The symptom is quite consistent: smaller files (8 MB in my tests) upload just fine. Larger files cut off at very close to the same point every time on a particular computer. A file that fails at 26% will fail at just about the same point every time; one that takes 1:40 to fail will fail within 2 seconds of that every time. I have set my php.ini settings extravagantly:

      post_max_size = 512M
      upload_max_filesize = 512M
      max_input_time = 3600
      max_execution_time = 3600

    Is there possibly a setting at the Apache level which would override PHP?
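
    Yes, Apache (and anything loaded into it) can cap request sizes independently of PHP. A few directives worth checking; the directives themselves are real, but whether your build loads the relevant modules is an assumption:

      # httpd.conf or a vhost; 0 means unlimited (the Apache core default)
      LimitRequestBody 0

      # if PHP runs under mod_fcgid, this limit defaults to a small value
      # FcgidMaxRequestLen 536870912

      # if mod_security is loaded, it enforces its own body limit
      # SecRequestBodyLimit 536870912

    The fact that failures happen at a consistent percentage per machine also smells like a timeout scaled to upload speed, so any proxy or FastCGI timeout in front of PHP is worth a look too.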

    Read the article

  • Do multi-platter hard drives use all of their heads to read simultaneously?

    - by WiSaGaN
    Suppose we have a hard disk with 2 platters and the characteristics below:

      Rotational rate: 10,000 RPM
      Avg sectors/track: 1000
      Surfaces: 4
      Sector size: 512 bytes

    I was reading "Computer Systems: A Programmer's Perspective, 2nd ed." when I found that it calculates transfer time as if the disk only uses ONE head to read a sector. If that's the case, why not use 4 heads to read (or write) on 4 surfaces? Then when I write a 2 KB file, each head would only need to wait for the platters to rotate one sector length instead of 4, reducing the transfer time by a factor of 4. Or even redesign the sector layout so that each sector lives on one cylinder, spread across the 4 tracks at the same position on the 4 surfaces, each part being 512/4 bytes. Then when the disk needs to read a 512-byte sector, it only needs to rotate roughly 1/4 as far as before. The idea looks like RAID 0.
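
    For reference, here is the single-head arithmetic the book's model implies, as a small sketch using the numbers above:

      # single-head timing for one 512-byte sector, from the stated geometry
      rpm = 10000
      sectors_per_track = 1000

      t_rotation_ms = 60.0 / rpm * 1000                  # full revolution: 6 ms
      t_avg_latency_ms = t_rotation_ms / 2               # avg rotational latency: 3 ms
      t_transfer_ms = t_rotation_ms / sectors_per_track  # one sector: 0.006 ms

      print(t_rotation_ms, t_avg_latency_ms, t_transfer_ms)

    Reading 4 surfaces in parallel would, in this idealized model, cut only the 0.006 ms transfer term by 4; the seek time and the 3 ms average rotational latency would not shrink, which is part of why single-head math is the textbook default.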

    Read the article

  • Remote location's status is "Disconnected"; how do I keep it connected (OK)? [net use command]

    - by AZ
    I have a script that keeps running and, under some scenarios, needs to contact a server (\\us-sign). From time to time, if this server goes uncontacted for a while, the next time my script needs it, it asks for my credentials. What I found is that after this happens, net use displays the server as disconnected. If I type "net use \\us-sign", it won't ask me for a user or password, which makes me believe my credentials for that server are still valid; nonetheless, its status remains "Disconnected". This script is supposed to help us automate some procedures, but having to keep watch over it in case it asks for credentials kind of defeats the purpose. How can I keep its status "OK" no matter how long the server goes uncontacted?
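
    One workaround to try, purely as a sketch (whether a periodic poke actually keeps the authenticated session alive depends on the server's idle timeout, and the share name below is a made-up placeholder):

      # keepalive.py -- touch the UNC path every few minutes so the
      # session never sits idle long enough to be torn down
      import os
      import time

      while True:
          try:
              os.listdir(r"\\us-sign\share")   # any cheap read works
          except OSError:
              pass                             # server unreachable; retry later
          time.sleep(300)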

    Read the article

  • How to remove a permanent network drive mapping on OS X Lion?

    - by Flijfi
    Some time ago I mapped a network drive on my Snow Leopard Mac, which was later upgraded to Lion. The network drive is not active any more, and I constantly receive popups with the error: "There was a problem connecting to the server XXXX." I have no idea how I configured it at the time. I may have put a mount command in a config file, but I no longer know where. I reviewed Preferences/Accounts/Login Items and there is no permanent mapping there. OS X is up to date as of Nov 27, 2011, and the issue is not related to the upgrade to Lion itself but to a misconfiguration. Any help will be greatly appreciated. (If you have the opposite problem, here is the link to solve it: Permanently map a network drive on Mac OS X Leopard)
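
    A quick way to hunt for the leftover reference, as a sketch (which of these files exist varies by setup, and XXXX stands in for the real server name from the error):

      # sweep the usual suspects for the stale server name
      grep -i "xxxx" /etc/fstab /etc/auto_master 2>/dev/null
      ls ~/Library/LaunchAgents /Library/LaunchAgents 2>/dev/null

    If the name turns up in an automounter map or a LaunchAgent, removing that entry should stop the popups.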

    Read the article

  • Total network data sent/received of a non-daemon Linux process?

    - by leden
    I'm looking for a simple and effective way of measuring the total bytes received/sent by a single process, reported upon its termination. Basically, I am looking for a tool with an interface similar to "time" and "/usr/bin/time", e.g.:

      measure-net-data <prog_to_run> <prog_args>
      Received (b): XYZ
      Sent (b): ABC

    I know there are many tools for bandwidth/network monitoring, but as far as I can tell all of them perform the measurements in real time, which is inappropriate not only because of the overhead but also because of the inconvenience: I would need to stop the program, capture the tool's output, and then kill it. I have seen that newer Linux kernels (2.6.20+) provide /proc/<pid>/io, which contains the information I'm looking for; however, everything under /proc/<pid> disappears when the process terminates, so I'm back to the same problem as with any network monitoring tool.
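
    The disappearing /proc entry can be worked around by sampling from a wrapper and keeping the last snapshot. A sketch along those lines, with two caveats: the final sample is inherently racy, and rchar/wchar in /proc/<pid>/io count all read/write traffic, not just sockets:

      #!/usr/bin/env python
      # measure-io.py -- run a command, report the last /proc/<pid>/io
      # snapshot taken before it exited
      import subprocess
      import sys
      import time

      proc = subprocess.Popen(sys.argv[1:])
      snapshot = {}
      while proc.poll() is None:
          try:
              with open("/proc/%d/io" % proc.pid) as f:
                  snapshot = dict(line.split(": ") for line in f)
          except IOError:
              break  # process exited between poll() and open()
          time.sleep(0.05)

      for key, value in sorted(snapshot.items()):
          print("%s: %s" % (key, value.strip()))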

    Read the article

  • Why is my backup to a USB drive so slow?

    - by Jonas
    I have tried several backup solutions for my data and none of them was good enough. I basically want to copy my files to an attached USB drive from time to time. I don't mind starting my backups manually, since the USB drive is not always connected. My problem is that my data contains a huge number of files, so backing up takes forever (more than 20 hours). Using "rsync" and other similar solutions does not work, because the I/O needed to check each file for changes takes longer than actually copying it. Any suggestions?

    Read the article

  • My microphone is too quiet (seems to be connected with Skype)

    - by AmitM9S6
    I've been chatting on Skype for a long time, and I don't know why, but for a while it's fine, and then it's too quiet. Here's how it goes: I boot the PC, get into a Skype call with friends, and talk with them (everything goes well). I close the Skype call, then go back to another call about an hour later, and they can't hear me well; I have to shout for them to hear me. In the settings it's already on 100% + 30 decibels, so I can't raise it any higher. If I reboot my PC, it works fine again until I close a Skype call. Every time, the call goes well, then when I close it and come back after a while, they hear me at a really low volume. I've been trying to fix this problem for about 2 months! If you're able to help me, I'd be so happy. (By the way, it's connected not through USB but through separate microphone and headphone jacks.) Also, I'm on Windows 7.

    Read the article

  • NLP with greatly constrained input and abilities

    - by Mike F
    Hat in hand here. I'm a seasoned developer and I would be grateful for a bit of help. I don't have time to read or digest long intricate discussions on theoretical concepts around NLP (or go get my PhD). That said, I have read a few, and it's a damn interesting field. The problem is I need real-world solutions, for real-world products, in real-world time frames.

    The problem I'm having is that right now I'm not sure what the right questions are to ask to get started implementing. I believe this is mostly related to vocabulary. I'll read something somewhere (a blog post, a forum post, a whitepaper) and it says, "I'm doing flooping with the blargy blarg method," and I go google flooping and blargy blarg, and I get references to more obscurity. It seemingly never ends.

    So, my question is multi-phased. First, more generally: how do I become passingly educated on this quickly? Just-in-time educated. I only need to know what I need to know to take the next step. I've spent 20 years writing code. Explain quick. I'll get it. (I mean provide a reference to something that explains quickly, of course.) I'm happy to read the right book, but I don't want to read a book where I read the chapter introduction that explains what floopy floop is and then skip over the rest of the chapter with examples of floopy flooping (because now I get what it is). I also don't want a book that goes into too much detail on theoretical underpinnings or history. For example, the Jurafsky book seems like way more than I need: http://www.amazon.com/gp/product/0131873210. But I will read it if it's the right book to read. (It's also dang expensive!) I need the root node of the expedited learning tree here, if you will. Point me in the right direction and I'll be quite grateful. I'm expecting quite a lot of firehose drinking; I just need the right firehose.

    Second, what I need to do is take a single sentence, with a very reduced vocabulary, and get a grammar tree (sorry if this is the wrong terminology) that I can do something with. I know I could easily write this command-line-input style in C in a more conventional manner, but I need it to be way better than that. I don't need a chatterbot either. What I'm doing needs to live in a constrained environment. I can't use Python (unfortunately). I can't ship with gigabytes of corpora. I need any libraries I use to be in C/C++. If I have to write this myself, I will. Hopefully it will be achievable given the reduced problem set. Maybe, probably, that's just naive. If so, let me know. :-) Thanks in advance - Mike

    Read the article

  • HOWTO: Migrate SharePoint site from one farm to another?

    - by Ramiz Uddin
    Hi Everyone,
    I have a site deployed on a development machine. The site was developed under WSS 3.0 and contains custom lists, features, templates, styles, etc. What I have to do is create a deployment package (setup) which I can hand to my client. I know about stsadm, but I don't have access to the production machine. Is there a way I can package all the dependencies, including the site content, into a single installation file and run it on the server?

    I've tried to experiment with the SharePoint Content Deployment Wizard. It all went well when exporting the site, but the import always fails with the following message:

      [2/2/2010 3:43:25 PM]: Start Time: 2/2/2010 3:43:25 PM.
      [2/2/2010 3:43:25 PM]: Progress: Initializing Import.
      [2/2/2010 3:43:42 PM]: FatalError: Could not find WebTemplate #75805 with LCID 1033.
         at Microsoft.SharePoint.Deployment.ImportRequirementsManager.VerifyWebTemplate(SPRequirementObject reqObj)
         at Microsoft.SharePoint.Deployment.ImportRequirementsManager.Validate(SPRequirementObject reqObj)
         at Microsoft.SharePoint.Deployment.ImportRequirementsManager.DeserializeAndValidate()
         at Microsoft.SharePoint.Deployment.SPImport.VerifyRequirements()
         at Microsoft.SharePoint.Deployment.SPImport.Run()
      [2/2/2010 3:43:48 PM]: Progress: Import Completed.
      [2/2/2010 3:43:48 PM]: Finish Time: 2/2/2010 3:43:48 PM.
      [2/2/2010 3:43:48 PM]: Completed with 0 warnings.
      [2/2/2010 3:43:48 PM]: Completed with 1 errors.

    Two further attempts (started at 3:44:51 PM and 3:56:17 PM) failed with the identical FatalError and stack trace.

    I actually couldn't find a good reference on how to use it, but this software doesn't do what I'm looking for anyway, which is to create a simple deployment package that needs nothing else done after it runs. I might be wrong, but after two days of googling I think there is no such (freeware) utility that can create a simple package of a site and install it on another farm without any configuration before running the installation package.
    You people might have advice that can help me look/think outside the box and get to the solution quickly instead of adding more days of working on the problem. Please share only freeware; I can't afford to buy anything. I'm waiting to be surprised with a good share :) Have a good day! Thanks.
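
    For what it's worth, the "Could not find WebTemplate #75805" failures usually mean the custom site definition/features the site was built on are not installed on the destination farm; any import route needs those deployed there first (typically as a solution/feature package). Assuming someone with console access can run commands on the target, the built-in WSS 3.0 route is stsadm export/import; the URLs and filenames below are placeholders:

      rem on the source farm
      stsadm -o export -url http://dev-server/site -filename site.cmp -includeusersecurity

      rem on the destination farm (custom features/templates must already be deployed)
      stsadm -o import -url http://prod-server/site -filename site.cmp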

    Read the article

  • How to implement an offline reader-writer lock

    - by Peter Morris
    Some context for the question: all objects in this question are persistent. All requests will be from a Silverlight client talking to an app server via a binary protocol (Hessian), not WCF. Each user will have a session key (not an ASP.NET session) which will be a string, integer, or GUID (undecided so far). Some objects might take a long time to edit (30 or more minutes), so we have decided to use pessimistic offline locking: pessimistic because having to reconcile conflicts would be far too annoying for users, offline because the client is not permanently connected to the server.

    Rather than storing session/object locking information in the object itself, I have decided that any aggregate root that may have its instances locked should implement an interface ILockable:

      public interface ILockable
      {
          Guid LockID { get; }
      }

    This LockID will be the identity of a "Lock" object which holds the information about which session is locking it. Now, if this were simple pessimistic locking I'd be able to achieve it very simply (using an incrementing version number on Lock to identify update conflicts), but what I actually need is reader-writer pessimistic offline locking. The reason is that some parts of the application perform actions that read these complex structures, including things like:

      - Reading a single structure to clone it.
      - Reading multiple structures in order to create a binary file to "publish" the data to an external source.

    Read locks will be held for a very short period of time, typically less than a second, although in some circumstances they could be held for about 5 seconds at a guess. Write locks will mostly be held for a long time, as they are mostly held by humans. There is a high probability of two users trying to edit the same aggregate at the same time, and a high probability of many users needing to temporarily read-lock at the same time too.

    I'm looking for suggestions as to how I might implement this. One additional point: if I want to place a write lock and there are some read locks, I would like to "queue" the write lock so that no new read locks are placed. If the read locks are removed within X seconds then the write lock is obtained; if not, the write lock backs off. No new read locks would be placed while a write lock is queued.

    So far I have this idea: the Lock object will have

      - a version number (int), so I can detect multi-update conflicts, reload, and try again;
      - a string[] for read locks;
      - a string holding the session ID that has the write lock;
      - a string holding the queued write lock;
      - possibly a recursion counter, to allow the same session to lock multiple times (for both read and write locks), but I'm not sure about this yet.

    Rules (see the sketch after this list):

      - Can't place a read lock if there is a write lock or a queued write lock.
      - Can't place a write lock if there is a write lock or a queued write lock.
      - If there are no locks at all, then a write lock may be placed.
      - If there are read locks, then the write lock is queued instead of placed outright. (If after X time the read locks are not gone, the queued lock backs off; otherwise it is upgraded.)
      - Can't queue a write lock for a session that holds a read lock.

    Can anyone see any problems? Suggest alternatives? Anything? I'd appreciate feedback before deciding on what approach to take.
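
    As a sanity check of those rules, here is a minimal in-memory sketch of the acquire logic (Python rather than the C#/persistence plumbing; all names are made up, and the version-number conflict check and recursion counters are elided):

      import time

      class OfflineLock:
          def __init__(self):
              self.readers = set()   # session ids holding read locks
              self.writer = None     # session id holding the write lock
              self.queued = None     # (session id, queued-at time) or None

          def try_read(self, session):
              # rule: no new read locks while a write lock is held or queued
              if self.writer or self.queued:
                  return False
              self.readers.add(session)
              return True

          def try_write(self, session, backoff_seconds):
              if self.writer:
                  return False                          # already write-locked
              if session in self.readers:
                  return False                          # can't queue over own read lock
              if self.queued and self.queued[0] != session:
                  return False                          # someone else is queued
              if not self.readers:
                  self.writer, self.queued = session, None
                  return True                           # no locks at all: take it
              if self.queued is None:
                  self.queued = (session, time.time())  # start draining readers
              elif time.time() - self.queued[1] > backoff_seconds:
                  self.queued = None                    # readers didn't drain: back off
              return False                              # caller retries while queued

    Callers would poll try_write until it returns True or the queue entry backs off; in the real thing, each transition would be guarded by a compare-and-swap on the Lock row's version number.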

    Read the article

  • Domain Validation in a CQRS architecture

    - by Jupaol
    Basically I want to know if there is a better way to validate my domain entities. This is how I am planning to do it, but I would like your opinion. The first approach I considered was:

      class Customer : EntityBase<Customer>
      {
          public void ChangeEmail(string email)
          {
              if (string.IsNullOrWhiteSpace(email)) throw new DomainException("...");
              if (!email.IsEmail()) throw new DomainException();
              if (email.Contains("@mailinator.com")) throw new DomainException();
          }
      }

    I do not like this validation because even though I am encapsulating the validation logic in the correct entity, it violates the Open/Closed principle (open for extension but closed for modification), and I have found that when this principle is violated, code maintenance becomes a real pain as the application grows in complexity. Why? Because domain rules change more often than we would like to admit, and if the rules are hidden and embedded in an entity like this, they are hard to test, hard to read, and hard to maintain. But the real reason I do not like this approach is: if the validation rules change, I have to come and edit my domain entity. This has been a really simple example, but in real life the validation could be more complex.

    So, following the philosophy of Udi Dahan (making roles explicit) and the recommendation of Eric Evans in the blue book, my next try was to implement the specification pattern, something like this:

      class EmailDomainIsAllowedSpecification : IDomainSpecification<Customer>
      {
          private INotAllowedEmailDomainsResolver invalidEmailDomainsResolver;

          public bool IsSatisfiedBy(Customer customer)
          {
              return !this.invalidEmailDomainsResolver.GetInvalidEmailDomains().Contains(customer.Email);
          }
      }

    But then I realized that in order to follow this approach I had to mutate my entities first, in order to pass in the value being validated (in this case the email), and mutating them would cause my domain events to fire, which I wouldn't want to happen until the new email is valid. So after considering these approaches, I came up with this one, since I am going to implement a CQRS architecture:

      class EmailDomainIsAllowedValidator : IDomainInvariantValidator<Customer, ChangeEmailCommand>
      {
          public void IsValid(Customer entity, ChangeEmailCommand command)
          {
              if (!command.Email.HasValidDomain()) throw new DomainException("...");
          }
      }

    That's the main idea: the entity is passed to the validator in case we need some value from the entity to perform the validation, the command contains the data coming from the user, and since the validators are injectable objects they can have external dependencies if the validation requires it.

    Now the dilemma. I am happy with a design like this because my validation is encapsulated in individual objects, which brings many advantages: easy unit testing, easy maintenance, domain invariants expressed explicitly in the Ubiquitous Language, easy extension, centralized validation logic, and validators that can be combined to enforce complex domain rules. Even though I know I am placing the validation of my entities outside of them (you could argue this smells of an Anemic Domain), I think the trade-off is acceptable.

    But there is one thing I have not figured out how to implement cleanly: how should I use these components?
    Since they will be injected, they won't fit naturally inside my domain entities, so basically I see two options:

      1. Pass the validators to each method of my entity.
      2. Validate my objects externally (from the command handler).

    I am not happy with option 1, so I will explain how I would do it with option 2:

      class ChangeEmailCommandHandler : ICommandHandler<ChangeEmailCommand>
      {
          // the validators required for this command would be injected here
          private IEnumerable<IDomainInvariantValidator> validators;

          public void Execute(ChangeEmailCommand command)
          {
              using (var t = this.unitOfWork.BeginTransaction())
              {
                  var customer = this.unitOfWork.Get<Customer>(command.CustomerId);
                  this.validators.ForEach(x => x.IsValid(customer, command));
                  // here I know the command is valid;
                  // the call to ChangeEmail will fire domain events as needed
                  customer.ChangeEmail(command.Email);
                  t.Commit();
              }
          }
      }

    Well, this is it. Can you give me your thoughts about this, or share your experiences with domain entity validation?

    EDIT: I think it is not clear from my question, but the real problem is: hiding the domain rules has serious implications for the future maintainability of the application, and domain rules change often during the life-cycle of an app, so implementing them with this in mind lets us extend them easily. Now imagine that in the future a rules engine is introduced: if the rules are encapsulated outside of the domain entities, this change would be easier to make.

    Read the article

  • Linq2SQL vs NHibernate performance (have I gone mad?)

    - by HeavyWave
    I have written the following tests to compare the performance of Linq2SQL and NHibernate, and I find the results to be somewhat strange. Mappings are straightforward and identical for both. Both are running against a live DB. I'm not deleting Campaigns in the Linq case, but that shouldn't affect performance by more than 10 ms.

    Linq:

      [Test]
      public void Test1000ReadsWritesToAgentStateLinqPrecompiled()
      {
          Stopwatch sw = new Stopwatch();
          Stopwatch swIn = new Stopwatch();
          sw.Start();
          for (int i = 0; i < 1000; i++)
          {
              swIn.Reset();
              swIn.Start();
              ReadWriteAndDeleteAgentStateWithLinqPrecompiled();
              swIn.Stop();
              Console.WriteLine("Run ReadWriteAndDeleteAgentState: " + swIn.ElapsedMilliseconds + " ms");
          }
          sw.Stop();
          Console.WriteLine("Total Time: " + sw.ElapsedMilliseconds + " ms");
          Console.WriteLine("Average time to execute queries: " + sw.ElapsedMilliseconds / 1000 + " ms");
      }

      private static readonly Func<AgentDesktop3DataContext, int, EntityModel.CampaignDetail> GetCampaignById =
          CompiledQuery.Compile<AgentDesktop3DataContext, int, EntityModel.CampaignDetail>(
              (ctx, sessionId) => (from cd in ctx.CampaignDetails
                                   join a in ctx.AgentCampaigns on cd.CampaignDetailId equals a.CampaignDetailId
                                   where a.AgentStateId == sessionId
                                   select cd).FirstOrDefault());

      private void ReadWriteAndDeleteAgentStateWithLinqPrecompiled()
      {
          int id = 0;
          using (var ctx = new AgentDesktop3DataContext())
          {
              EntityModel.AgentState agentState = new EntityModel.AgentState();
              var campaign = new EntityModel.CampaignDetail { CampaignName = "Test" };
              var campaignDisposition = new EntityModel.CampaignDisposition { Code = "123" };
              campaignDisposition.Description = "abc";
              campaign.CampaignDispositions.Add(campaignDisposition);
              agentState.CallState = 3;
              campaign.AgentCampaigns.Add(new AgentCampaign { AgentState = agentState });
              ctx.CampaignDetails.InsertOnSubmit(campaign);
              ctx.AgentStates.InsertOnSubmit(agentState);
              ctx.SubmitChanges();
              id = agentState.AgentStateId;
          }
          using (var ctx = new AgentDesktop3DataContext())
          {
              var dbAgentState = ctx.GetAgentStateById(id);
              Assert.IsNotNull(dbAgentState);
              Assert.AreEqual(dbAgentState.CallState, 3);
              var campaignDetails = GetCampaignById(ctx, id);
              Assert.AreEqual(campaignDetails.CampaignDispositions[0].Description, "abc");
          }
          using (var ctx = new AgentDesktop3DataContext())
          {
              ctx.DeleteSessionById(id);
          }
      }

    NHibernate (the loop is the same):

      private void ReadWriteAndDeleteAgentState()
      {
          var id = WriteAgentState().Id;
          StartNewTransaction();
          var dbAgentState = agentStateRepository.Get(id);
          Assert.IsNotNull(dbAgentState);
          Assert.AreEqual(dbAgentState.CallState, 3);
          Assert.AreEqual(dbAgentState.Campaigns[0].Dispositions[0].Description, "abc");
          var campaignId = dbAgentState.Campaigns[0].Id;
          agentStateRepository.Delete(dbAgentState);
          NHibernateSession.Current.Transaction.Commit();
          Cleanup(campaignId);
          NHibernateSession.Current.BeginTransaction();
      }

    Results:

      NHibernate: Total Time: 9469 ms,   average time to execute 13 queries: 9 ms
      Linq:       Total Time: 127200 ms, average time to execute 13 queries: 127 ms

    Linq lost by a factor of 13.5! Even with precompiled queries (both read queries are precompiled). This can't be right; although I expected NHibernate to be faster, this is just too big a difference, considering the mappings are identical and NHibernate actually executes more queries against the DB.

    Read the article

  • finishing some functions in this code

    - by osabri
    I have a problem finishing some functions in this code:

      program lab4;

      var inFile : text;
      var pArray : array[1..10] of real;     // array of 10 values containing patterns to search for in a given set of numbers
      var rArray : array[1..10] of integer;  // result of the pattern search: each index in rArray corresponds to the number of
                                             // occurrences of the pattern from pArray in sArray
      var sArray : array[1..100] of real;    // array containing data read from file
      var accuracy : real;

      (****************************************************************************)
      function errMsg : integer;
      begin
          if ParamCount < 3 then
          begin
              writeln('Too few arguments');
              writeln('Usage: ./lab4 lab4.txt <accuracy> <pattern_1> <pattern_2> <pattern_n>');
              errMsg := -1;
          end
          else if ParamCount > 12 then
          begin
              writeln('Too many arguments');
              writeln('Maximum number of patterns is 10');
              errMsg := -1;
          end
          else
          begin
              assign(inFile, ParamStr(1));
              {$I-}
              reset(inFile);
              if ioresult <> 0 then
              begin
                  writeln('Cannot open ', ParamStr(1));
                  errMsg := -1;
              end
              else
                  errMsg := 0;
          end;
      end;

      (****************************************************************************)
      procedure readPattern;
      var errPos, idx : integer;
      begin
          if errMsg = 0 then
          begin
              for idx := 1 to ParamCount - 2 do
              begin
                  Val(ParamStr(idx + 2), pArray[idx], errPos);
                  writeln('pArray:', pArray[idx]:2:2);
              end;
          end;
      end;

      (****************************************************************************)
      procedure getAccuracy;
      // Function should get the accuracy as the first param (after the program name)
      begin
          { this is where I stopped :(( }
      end;

      (****************************************************************************)
      function readSet : integer;
      // Function returns number of elements read from file
      var idx, errPos : integer;
      var sChar : string;
      begin
          if errMsg = 0 then
          begin
              idx := 1;
              repeat
              begin
                  readln(inFile, sChar);
                  Val(sChar, sArray[idx], errPos);
                  writeln('sArray:', sArray[idx]:2:2);
                  idx := idx + 1;
              end
              until eof(inFile) = TRUE;
          end;
          readSet := idx - 1;
      end;

      (****************************************************************************)
      procedure searchPattern(sNumber : integer);
      // Function should search for and count pattern occurrences in a given set
      // -- what is the best solution for this part?

      (****************************************************************************)
      procedure dispResult;
      // Function should display the result of the pattern search
      begin

      (****************************************************************************)
      begin
          readPattern;
          getAccuracy;
          searchPattern(readSet);
          dispResult;
      end.
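
    One possible way to fill in the gaps, offered as a sketch: it assumes the intended semantics is "count how many data values fall within +/- accuracy of each pattern", which is a guess from the variable names:

      procedure getAccuracy;
      { parse accuracy from the 2nd argument, mirroring readPattern }
      var errPos : integer;
      begin
          if errMsg = 0 then
              Val(ParamStr(2), accuracy, errPos);
      end;

      procedure searchPattern(sNumber : integer);
      var i, j : integer;
      begin
          for i := 1 to ParamCount - 2 do
          begin
              rArray[i] := 0;
              for j := 1 to sNumber do
                  if abs(sArray[j] - pArray[i]) <= accuracy then
                      rArray[i] := rArray[i] + 1;
          end;
      end;

      procedure dispResult;
      var i : integer;
      begin
          for i := 1 to ParamCount - 2 do
              writeln('pattern ', pArray[i]:2:2, ' found ', rArray[i], ' times');
      end;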

    Read the article

  • I'm looking for an online ASP.NET tutor.

    - by pkiyan
    $15/hr. I know it's not much but... Hi. I'm looking for an ASP.NET tutor. I want to use a remote desktop application so we can see each other's screens, and Skype or phone to communicate. You won't need to come up with any lessons or anything like that. I was thinking we could spend an hour or two each time we logged in to build a decent-sized website from scratch. That's basically it. I'm a beginner with about 2 months of experience with ASP.NET, so we won't have to start from the very beginning, but pretty close. I want this site to have a little complexity to it, not just be a website for beginners, but something I could study for a while. I'll pay you through PayPal or some other method if you prefer. By the way, it doesn't have to be a website that we work on together; I'll listen to other suggestions too. Maybe we could use an open source site/app to walk through, study, and modify. I've looked at 'My Web Pages Starter Kit 1.30', 'SubText 2.1.2', 'nopCommerce 1.5', and some others. They were all beyond me, and I couldn't make sense of any of the source code. But if you use and are really familiar with an open source app/site that I can download, we could study that. Here are some technical specs for the site I'd like to build/study:

      - ASP.NET 2.0+ (preferably 3.5+, but I don't really care)
      - C# / VB.NET (don't really care; I suck at both. This is more about ASP.NET and helping me understand the structure of an ASP.NET website and the .NET framework in general.)
      - SQL Server (I have SQL Server 2008 Express and would someday like to learn how to use this thing.)
      - JavaScript / AJAX (at least some use of this)
      - XML (basically, I'd like to spend some time in the web.config file and have some sense of what's going on in there.)
      - ASP.NET folders (I'd like to work with all of the ASP.NET folders if possible: App_Code, App_GlobalResources, etc., and understand what does and doesn't go in them. Hopefully we can build more than one theme too.)
      - Assemblies (how do you create a .dll and use it across different websites? Maybe you could suggest a third-party .dll that we could use.)
      - Web services (I read about this once but didn't really get it.)

    I can't think of anything else, but the above will definitely keep me busy. Hopefully we can make use of a lot of the server controls too (the nav controls gave me a headache when I tried customizing them). Is someone willing to help? I'll pay $15 an hour through PayPal. I live in the Dallas, Texas (US) area, so we'd have to synchronize time zones and agree on a day(s)/time of the week. I prefer working at night and on the weekends because I work during the week, but whatever your schedule allows too. If you'd like to help me, can you post: your years of experience with ASP.NET, your time zone and the times you're available, and any ideas you might have about how you'd like to tutor? THANK YOU.

    Read the article

  • Perl: Deleting multiple recurring lines where a certain criterion is met

    - by george-lule
    Dear all, I have data that looks like the sample below; the actual file is thousands of lines long.

      Event_time                 Cease_time                 Object_of_reference
      -------------------------- -------------------------- ----------------------------------------------------------------------------------
      Apr 5 2010 5:54PM          NULL                       SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSJN1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=LUGALAMBO_900
      Apr 5 2010 5:55PM          Apr 5 2010 6:43PM          SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSJN1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=LUGALAMBO_900
      Apr 5 2010 5:58PM          NULL                       SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=BULAGA
      Apr 5 2010 5:58PM          Apr 5 2010 6:01PM          SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=BULAGA
      Apr 5 2010 6:01PM          NULL                       SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=BULAGA
      Apr 5 2010 6:03PM          NULL                       SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSJN1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=KAPKWAI_900
      Apr 5 2010 6:03PM          Apr 5 2010 6:04PM          SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSJN1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=KAPKWAI_900
      Apr 5 2010 6:04PM          NULL                       SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSJN1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=KAPKWAI_900
      Apr 5 2010 6:03PM          Apr 5 2010 6:03PM          SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=BULAGA
      Apr 5 2010 6:03PM          NULL                       SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=BULAGA
      Apr 5 2010 6:03PM          Apr 5 2010 7:01PM          SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=BSCC1,BssFunction=BSS_ManagedFunction,BtsSiteMgr=BULAGA

    As you can see, the file has a header which describes what the various fields stand for (event start time, event cease time, affected element), followed by a row of dashes. My issue is that the data contains a number of entries where the cease time is NULL, i.e. the event is still active. All such entries must go: for each element where the alarm cease time is NULL, the start time, the cease time (in this case NULL), and the affected element must be deleted from the file. In the remaining data, all the text from the word SubNetwork up to BtsSiteMgr= must also go, along with the headers and the dashes. The final output should look like this:

      Apr 5 2010 5:55PM          Apr 5 2010 6:43PM          LUGALAMBO_900
      Apr 5 2010 5:58PM          Apr 5 2010 6:01PM          BULAGA
      Apr 5 2010 6:03PM          Apr 5 2010 6:04PM          KAPKWAI_900
      Apr 5 2010 6:03PM          Apr 5 2010 6:03PM          BULAGA
      Apr 5 2010 6:03PM          Apr 5 2010 7:01PM          BULAGA

    Below is the Perl script I have written. It takes care of the headers, the dashes, and the NULL entries, but I have failed to delete the lines following the NULL entries so as to produce the above output.

      #!/usr/bin/perl
      use strict;
      use warnings;

      $^I = ".bak";  # Backup the file before messing it up.
      open (DATAIN,  "<george_perl.txt") || die("can't open datafile: $!");  # Read in the data
      open (DATAOUT, ">gen_results.txt") || die("can't open datafile: $!");  # Prepare for the writing
      while (<DATAIN>) {
          s/Event_time//g;
          s/Cease_time//g;
          s/Object_of_reference//g;
          s/\-//g;
          # Preceding 4 statements are for cleaning out the headers
          my $theline = $_;
          if ($theline =~ /NULL/) {
              next;
              next if $theline =~ /SubN/;
          }
          else {
              print DATAOUT $theline;
          }
      }
      close DATAIN;
      close DATAOUT;

    Kindly help point out any modifications I need to make to the script to produce the necessary output. I will be very glad for your help. Kind regards, George.
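
    For what it's worth, a sketch of one way to finish the job, assuming each record sits on a single line as in the sample above (note that the second next in the posted loop is unreachable, because the plain next before it already ends the iteration):

      while (<DATAIN>) {
          next if /Event_time/ or /^\s*-/;   # drop the header and the dashes
          next if /\bNULL\b/;                # drop still-active alarms entirely
          s/SubNetwork=\S+BtsSiteMgr=//;     # keep only the site name
          print DATAOUT $_;
      }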

    Read the article

  • Optimizing Python code with many attribute and dictionary lookups

    - by gotgenes
    I have written a program in Python which spends a large amount of time looking up attributes of objects and values from dictionary keys. I would like to know if there's any way I can optimize these lookup times, potentially with a C extension, to reduce the time of execution, or if I need to simply re-implement the program in a compiled language.

    The program implements some algorithms using a graph. It runs prohibitively slowly on our data sets, so I profiled the code with cProfile using a reduced data set that could actually complete. The vast majority of the time is being burned in one function, and specifically in two statements, generator expressions, within the function. The generator expression at line 202 is

      neighbors_in_selected_nodes = (neighbor for neighbor in node_neighbors
                                     if neighbor in selected_nodes)

    and the generator expression at line 204 is

      neighbor_z_scores = (interaction_graph.node[neighbor]['weight']
                           for neighbor in neighbors_in_selected_nodes)

    The source code for the function, in context, is provided below. selected_nodes is a set of nodes in the interaction_graph, which is a NetworkX Graph instance. node_neighbors is an iterator from Graph.neighbors_iter(). Graph itself uses dictionaries for storing nodes and edges; its Graph.node attribute is a dictionary which stores nodes and their attributes (e.g., 'weight') in dictionaries belonging to each node.

    Each of these lookups should be amortized constant time (i.e., O(1)); however, I am still paying a large penalty for them. Is there some way I can speed up these lookups (e.g., by writing parts of this as a C extension), or do I need to move the program to a compiled language?

    Below is the full source code for the function; the vast majority of execution time is spent within it.

      def calculate_node_z_prime(node, interaction_graph, selected_nodes):
          """Calculates a z'-score for a given node.

          The z'-score is based on the z-scores (weights) of the neighbors
          of the given node, and proportional to the z-score (weight) of the
          given node. Specifically, we find the maximum z-score of all
          neighbors of the given node that are also members of the given set
          of selected nodes, multiply this z-score by the z-score of the
          given node, and return this value as the z'-score for the given
          node. If the given node has no neighbors in the interaction graph,
          the z'-score is defined as zero.

          Returns the z'-score as zero or a positive floating point value.

          :Parameters:
          - `node`: the node for which to compute the z-prime score
          - `interaction_graph`: graph containing the gene-gene or gene
            product-gene product interactions
          - `selected_nodes`: a `set` of nodes fitting some criterion of
            interest (e.g., annotated with a term of interest)

          """
          node_neighbors = interaction_graph.neighbors_iter(node)
          neighbors_in_selected_nodes = (neighbor for neighbor in node_neighbors
                                         if neighbor in selected_nodes)
          neighbor_z_scores = (interaction_graph.node[neighbor]['weight']
                               for neighbor in neighbors_in_selected_nodes)
          try:
              max_z_score = max(neighbor_z_scores)
          # max() throws a ValueError if its argument has no elements; in this
          # case, we need to set the max_z_score to zero
          except ValueError, e:
              # Check to make certain max() raised this error
              if 'max()' in e.args[0]:
                  max_z_score = 0
              else:
                  raise e
          z_prime = interaction_graph.node[node]['weight'] * max_z_score
          return z_prime

    Here are the top couple of calls according to cProfile, sorted by time:

      ncalls     tottime  percall  cumtime  percall  filename:lineno(function)
      156067701  352.313  0.000    642.072  0.000    bpln_contextual.py:204(<genexpr>)
      156067701  289.759  0.000    289.759  0.000    bpln_contextual.py:202(<genexpr>)
      13963893   174.047  0.000    816.119  0.000    {max}
      13963885   69.804   0.000    936.754  0.000    bpln_contextual.py:171(calculate_node_z_prime)
      7116883    61.982   0.000    61.982   0.000    {method 'update' of 'set' objects}
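
    One cheap experiment before reaching for C, offered as a sketch (the speedup is workload-dependent): hoist the dict lookups out of the hot generators and let set.intersection do the membership test in C:

      def calculate_node_z_prime(node, interaction_graph, selected_nodes):
          node_attrs = interaction_graph.node          # bind the attribute lookup once
          neighbors = interaction_graph.neighbors_iter(node)
          # set.intersection consumes the iterator and performs the
          # "in selected_nodes" test in C instead of bytecode
          in_selected = selected_nodes.intersection(neighbors)
          if in_selected:
              max_z_score = max(node_attrs[n]['weight'] for n in in_selected)
          else:
              max_z_score = 0
          return node_attrs[node]['weight'] * max_z_score

    This removes one generator entirely and one attribute lookup per neighbor; whether it is enough to avoid a C extension is something only the profiler can say.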

    Read the article

  • PHP template caching design

    - by Thomas
    Hello to all, I want to include caching in my app design: caching templates, for starters. The design I have used so far is very modular. I have created an ORM implementation for all my tables, and each table is represented by a corresponding class. All requests are handled by one controller, which routes them to the appropriate webmethod functions. I am using a Template class for handling UI parts. What I have in mind for caching is the implementation of a separate Cache class with the flexibility to store in files, APC, or memcache. Right now I am testing with file caching.

    Some thoughts. Should I include the logic for checking for cached versions in the Template class, or in the webmethods which handle the incoming requests and eventually call the Template class? In the first case, things are pretty simple, as I will not have to change anything more than pass the Template class an extra argument (whether to load from cache or not). In the second case, however, I am thinking of checking for a cached version immediately in the webmethod and, if found, returning it. This would save all the processing done until the logic reaches the template (first-case scenario). Both scenarios, however, rely on an accurate mechanism for invalidating caches, which brings us to:

    Invalidating caches. As I see it (and you can add your input freely), a cached template file becomes invalid if:

      a. the expiration set is reached;
      b. the template file itself is updated (i.e. the developer adds a new line);
      c. the webmethod that handles the request changes (i.e. the developer adds/deletes something in the code);
      d. content coming from the DB and ending up in the template file is modified.

    I am thinking of storing a JSON-encoded array inside the cached file. The first value will be the expiration timestamp of the cache; the second will be the modification time of the PHP file with the code handling the request (to cope with option c above); the third will be the content itself. The validation process I am considering, following the scenarios above, is:

      a. If the expiration of the cached file (stored in the array) is reached, delete the cache file.
      b. If the cached file's mtime is older than the template skeleton file's mtime, delete the cached file.
      c. If the mtime of the PHP file is greater than the one stored in the cache, delete the cached file.
      d. This is tricky. In the ORM implementation I have added event handlers (which fire when adding, updating, or deleting objects). I could delete the cache file every time an object that provides content to the template is modified. The problem is how to keep track of which cached files correspond to which schema objects. Take this example: a user has a short-profile page and a full-profile page (2 templates), and both templates can be cached. Now, every time the user modifies his profile, the event handler needs to know which templates or cached files correspond to the User, so that those files can be deleted.
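
    For (d), one common pattern is tag-based invalidation: every cached template registers the entities it was built from, and the ORM event handlers invalidate by tag. A minimal file-based sketch, with all names hypothetical:

      <?php
      class TaggedCache
      {
          private $dir;

          public function __construct($dir) { $this->dir = $dir; }

          // store content and append this key to an index file per tag (e.g. "User:42")
          public function store($key, $content, array $tags)
          {
              file_put_contents("$this->dir/$key", $content);
              foreach ($tags as $tag) {
                  file_put_contents("$this->dir/tag-$tag", "$key\n", FILE_APPEND);
              }
          }

          // delete every cached file recorded under the tag, then the index itself
          public function invalidateTag($tag)
          {
              $index = "$this->dir/tag-$tag";
              if (!file_exists($index)) { return; }
              foreach (file($index, FILE_IGNORE_NEW_LINES) as $key) {
                  @unlink("$this->dir/$key");
              }
              unlink($index);
          }
      }

    The User's ORM update handler would then just call $cache->invalidateTag("User:$id"), and both profile templates disappear without the handler needing to know they exist.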

    Read the article

  • Apache/PHP serving file multiple times

    - by easement
    I have a system with a download.php page. The page takes an id, loads a file based on the corresponding DB record, and then serves it up. I've noticed a couple of instances where files are requested multiple times in short time spans (20 ms), times too quick for human input. There are plenty of instances where the downloader functions fine. However, taking a closer look at the downloader's usage, I did see some interesting behavior.

    For instance, the IP address xxx.xxx.xxx.xxx (which is in a range owned by xxxxxx.de in Germany) came to the site through Google. They browsed around and then came to the page http://site.com/xxxx/press+125.php. There they issued a request for /download.php?id=/ZZ/n+aH55Y= (a PDF) at 9:04:23 AM. That alone is not a big deal. What is interesting is that the server seems to have been quite preoccupied with serving that request. In the logs, the request first completes between 9:09:48 and 9:10:00. It looks like the user got tired of waiting during that time and requested the document two more times. Between 09:14:47 and 09:15:00 the same request appears again, except it is from 9:04:43 AM, 20 ms later than the first request. Then it pops up a third time, with a request that started at 09:05:06 completing between 09:19:55 and 09:19:58!

    I'm suspicious of that document. Looking through the logs, I see other instances where it takes the server a little while to handle that specific file. Check out this list of requests from zzz.zzz.zzz.zzz (different from above) for the file /download.php?id=/ZZ/n+aH55Y= (the same document as before):

      Request time    Complete time
      04:32:43        04:33:36
      04:32:50        04:33:36
      04:32:51        04:33:38
      04:33:05        04:33:38
      04:33:34        04:33:42
      04:33:05        04:33:42

    So something is definitely going on. Whether it has to do with this specific document tripping up the server, the download.php page's code, or whether we're just seeing evidence of some server-level overload playing out in real time, I'm not yet sure. In fairness, there are other instances of people downloading /download.php?id=/ZZ/n+aH55Y= (the same PDF) without error. However, it is interesting that the multiple processes only seem to happen with this one file, and then only when it is accessed through the page http://site.com/press+125.php. It bears further investigation whether there's something amiss in the code that causes the system to fire off multiple download requests that occupy the server. I don't know if press+125.php is a rabbit hole, but it is a weird coincidence. Any ideas? I'm totally out of ideas. Apache maxed out? Things like that.

      // DOWNLOAD.php
      $file = new files();
      $file->comparison_filter("id", "=", $id);  // sql to load
      if ($file->load()) {
          $file->serve();
      }

      // FILES
      function serve()
      {
          if ($this->is_loaded) {
              if (file_exists($this->get_value("filename"))) {
                  if ($this->get_value("content_type") != "") {
                      header("Content-Type: " . $this->get_value("content_type"));
                  }
                  header("Content-Length: " . filesize($this->get_value("filename")));
                  if ($this->get_value("flag_image") == 0 || $this->get_value("flag_image") == false) {
                      header("Cache-Control: private");
                      header("Content-Disposition: attachment; filename=" . urlencode($this->get_value("original_filename")));
                  }
                  set_time_limit(0);
                  @readfile($this->get_value("filename"));
                  exit;
              }
          }
      }
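
    One hypothesis worth testing before assuming server overload (this is an assumption, not a diagnosis): PDF viewers and download managers often issue HTTP Range requests, and a script that ignores the Range header forces every partial request to restart the full file, which looks exactly like one slow request plus repeats. A quick probe to confirm what clients are actually asking for:

      // sketch: log the Range header and user agent before serving
      error_log(sprintf(
          "download id=%s range=%s ua=%s",
          $_GET['id'],
          isset($_SERVER['HTTP_RANGE']) ? $_SERVER['HTTP_RANGE'] : '-',
          $_SERVER['HTTP_USER_AGENT']
      ));

    If ranged requests show up for that PDF, either implementing Range support or sending "Accept-Ranges: none" should change the pattern.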

    Read the article
