Search Results

Search found 10693 results on 428 pages for 'stay updated'.

Page 293/428

  • Automatically update audit information on Entity

    - by Nix
    I have an entity model that has audit information on every table (50+ tables): CreateDate, CreateUser, UpdateDate, UpdateUser. Currently we are programmatically updating the audit information. Ex: if(changed){ entity.UpdatedOn = DateTime.Now; entity.UpdatedBy = Environment.UserName; context.SaveChanges(); } But I am looking for a more automated solution. During save changes, if an entity is created/updated I would like to automatically update these fields before sending them to the database for storage. Any suggestion on how I could do this? Let me know if any more information is needed.
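
    One common approach, sketched below, is to intercept SaveChanges and stamp the audit fields there. This is only a sketch: it assumes an EF DbContext-based model and a hypothetical IAuditable interface that the audited entities implement (e.g. via partial classes); with the older ObjectContext API the same idea applies through ObjectStateManager.GetObjectStateEntries.

    ```csharp
    using System;
    using System.Data;
    using System.Data.Entity;

    // Hypothetical marker interface implemented by the audited entities;
    // property names mirror the audit columns described above.
    public interface IAuditable
    {
        DateTime CreateDate { get; set; }
        string CreateUser { get; set; }
        DateTime UpdateDate { get; set; }
        string UpdateUser { get; set; }
    }

    public class MyDbContext : DbContext // stand-in for the real context
    {
        public override int SaveChanges()
        {
            foreach (var entry in ChangeTracker.Entries<IAuditable>())
            {
                if (entry.State == EntityState.Added)
                {
                    entry.Entity.CreateDate = DateTime.Now;
                    entry.Entity.CreateUser = Environment.UserName;
                }

                if (entry.State == EntityState.Added || entry.State == EntityState.Modified)
                {
                    entry.Entity.UpdateDate = DateTime.Now;
                    entry.Entity.UpdateUser = Environment.UserName;
                }
            }

            return base.SaveChanges();
        }
    }
    ```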

    Read the article

  • Get rid of all user data of Android application after bigger update

    - by Johe Green
    Hi, I completely revamped an app and tested it for a while on my device and emulator. The app worked fine. However, when I updated the app through the Android Market, my users experienced crashes. Since there is no way to properly debug this procedure, I assume the crash is caused by old data which is not being removed from the device (probably from the onSaveInstanceState bundle?!). Is there a way to do a "clean/total" reinstall without having the user do it manually? Best Regards Johe

    Read the article

  • How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment

    - by user69508
    (Please forgive if this is posted in an incorrect forum. We didn't know exactly where to post it.) We have an ASP.NET Web API single-page application - a browser-based app running in IIS to serve up HTML5/CSS3/JavaScript, which talks to the ASP.NET Web API endpoint only to access a database and transfer JSON data. Everything is working great in our development environment - that is, we have one Visual Studio solution with an ASP.NET Web API project and two class library projects for data access. While developing and testing on development boxes, using IIS Express on a localhost:port to run the site and access the Web API, everything is fine. Now we need to move it to a production environment (and we're having problems - or just not understanding what needs to be done). The production environment is all internal (nothing will be exposed on the public Internet). There are two domains. One domain, the corporate domain, is where all users log in normally. The other domain, the process domain, contains the SQL Server instance that our app and Web API will need to access. The IT staff wants to put a DMZ between the two domains to house the IIS app and shield the users on the corporate domain from having direct access into the process domain. So, I guess what they want is: corp domain (end users) <– firewall (open port 80) <– DMZ (web server running IIS) <– firewall (open port 80 or 1433????) <– process domain (IIS for Web API and SQL Server). We're developers and don't really understand all the networking aspects, so we're wondering how to deploy our browser/Web API application in this scenario. Do we need to break up our application so that all the client code (HTML5/CSS3/JavaScript/images/etc.) is on the IIS server in the DMZ, while the Web API gets installed on the server in the process domain? Or does the entire app (client code and Web API) stay together on the IIS server in the DMZ, which then somehow accesses the SQL Server instance to get data? From the IIS server and app in the DMZ, would you simply access the Web API on the server in the process domain by going to "http://server/appname/api/getitems"? In the second firewall between the DMZ and the process domain, would you have to open port 1433, or just port 80 since the Web API is an HTTP endpoint? Or is there some better way of deployment (i.e., how are ASP.NET Web API single-page applications written entirely in HTML5 and JavaScript supposed to be deployed to production environments)? I'm sure there are other questions, but we'll start with these. Thanks!!! (Note: the servers are Win2k8 R2, SQL Server 2k8 R2, and IIS 7.5.)
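
    For what it's worth, a minimal sketch of the data path is below. It assumes the Web API is hosted on the process-domain IIS box; the controller name, the "ProcessDb" connection-string name, and the dbo.Items table are hypothetical. The point it illustrates is that only the Web API host needs SQL connectivity (port 1433), while the browser and anything in the DMZ reach the API purely over HTTP (port 80).

    ```csharp
    using System.Collections.Generic;
    using System.Configuration;
    using System.Data.SqlClient;
    using System.Web.Http;

    // Hosted on the process-domain server: only this machine talks to SQL Server.
    public class ItemsController : ApiController
    {
        [HttpGet]
        public IEnumerable<string> GetItems()
        {
            var items = new List<string>();
            var cs = ConfigurationManager.ConnectionStrings["ProcessDb"].ConnectionString; // hypothetical name

            using (var conn = new SqlConnection(cs))
            using (var cmd = new SqlCommand("SELECT Name FROM dbo.Items", conn)) // hypothetical table
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        items.Add(reader.GetString(0));
                    }
                }
            }

            return items; // Web API serializes this to JSON for the browser in the corp domain
        }
    }
    ```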

    Read the article

  • jQuery success:function{} issue

    - by ufw
    I have several 'select' elements on the page. When I choose one of the options, an ajax request is sent to the server and the element adjacent to this 'select' must be updated with the response value. I expected the following code to work: $(".vars").live("change", function() { //selected something in <select> list var $this = $(this); $.ajax({ type: "POST", url: "someurl.php", data: { /* some data */ }, success: function(html) { $this.next().html(html); //this does not update the .next() element. } }); }); If I replace $this.next().html(html); with alert(html); I can see the ajax request was successful. Moreover, it works only if there is only one 'select' on the page; otherwise an empty pop-up appears.

    Read the article

  • PropertyChanging is null while updating a value

    - by dinesh
    Retrieving the data object through a class which is a partial class in LINQ to SQL. I am using the same object model to update the changes, but PropertyChanging is always null for this object. An anonymous type has been converted into a strong object type, and this strong type is updated again. Can anyone explain the reason for this? Sample code: var questions = from CheckListInstanceQuestion in _db.CheckListInstanceQuestionDataSource join Question in _db.QuestionDataSource on CheckListInstanceQuestion.UI_QuestionID equals Question.UI_QuestionID where CheckListInstanceQuestion.UI_CheckListInstanceID == instanceID && CheckListInstanceForm.UI_TemplateCheckListFormID == selectedFormID orderby CheckListInstanceQuestion.IN_Order, CheckListInstanceQuestion.VC_Code select new { Question.UI_QuestionID, Question.VC_Description }; ICollection<CheckListInstanceQuestion> checkListQuestions = new List<CheckListInstanceQuestion>(); foreach (var question in questions) { checkListQuestions.Add(new CheckListInstanceQuestion { UI_QuestionID = question.UI_QuestionID, VC_Description = question.VC_Description }); }
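
    A possible explanation, sketched below under the assumption that standard LINQ to SQL change tracking is in play: the PropertyChanging/PropertyChanged handlers are attached by the DataContext only to entities it materializes or that are attached to it. Entities constructed by hand from an anonymous-type projection are not tracked, so their PropertyChanging event has no subscribers and is null. The fragment reuses the question's _db context, and someQuestionId is a hypothetical key value.

    ```csharp
    // Sketch only: load the tracked entity instead of rebuilding it from a projection.
    var question = _db.CheckListInstanceQuestionDataSource
                      .Single(q => q.UI_QuestionID == someQuestionId); // hypothetical id

    // PropertyChanging/PropertyChanged fire here because the DataContext is tracking the entity.
    question.VC_Description = "Updated description";

    _db.SubmitChanges(); // the change is detected and persisted
    ```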

    Read the article

  • Windows batch file: is there a way to add an "and"?

    - by Beska
    The short version: is there a way to write an "and" clause in a batch file? The slightly longer version: I've inherited a Visual Studio project that creates a dll and then copies that dll to another location. As a post-build step, VS runs the following script: if not '$(ConfigurationName)' == 'DebugNoSvc' goto end xcopy /Y $(TargetDir)*.config $(ProjectDir)..\myService\bin\Debug xcopy /Y $(TargetDir)*.config $(ProjectDir)..\myService\bin\DebugNoSvc :end It looks like there's a problem when the project is compiled as Debug, since it doesn't do the copy (I'm guessing that at some point the middle section got updated, but the if clause didn't). Is there an easy way to write an "and" clause in batch?

    Read the article

  • How to update the progress bar at runtime using C#

    - by karthik
    I am using the code below to update my progress bar: ProgressBar.Visible = true; ProgressBar.Minimum = 1; ProgressBar.Maximum = PortCount; ProgressBar.Value = 1; ProgressBar.Step = 1; int intdata = 5; for (int x = 1; x <= intdata; x++) { ProgressBar.PerformStep(); } MessageBox.Show("Done"); But it is not getting updated at runtime. Is it because the progress bar is on the same thread? If so, how do I update this progress bar from another thread? Help...
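
    One common pattern, sketched below, is to run the work on a background thread and marshal progress back to the UI thread so the form can repaint between steps. This is only a sketch: it assumes a hypothetical WinForms form class that owns a ProgressBar control named ProgressBar, and the loop body stands in for the real per-port work.

    ```csharp
    using System.ComponentModel;
    using System.Windows.Forms;

    public partial class MainForm : Form // hypothetical form that owns the ProgressBar control
    {
        private void StartWork(int portCount)
        {
            ProgressBar.Minimum = 0;
            ProgressBar.Maximum = portCount;
            ProgressBar.Value = 0;

            var worker = new BackgroundWorker { WorkerReportsProgress = true };

            worker.DoWork += (s, e) =>
            {
                for (int x = 1; x <= portCount; x++)
                {
                    // ... do the real work for port x here ...
                    worker.ReportProgress(x); // raises ProgressChanged on the UI thread
                }
            };

            worker.ProgressChanged += (s, e) => ProgressBar.Value = e.ProgressPercentage;
            worker.RunWorkerCompleted += (s, e) => MessageBox.Show("Done");

            worker.RunWorkerAsync();
        }
    }
    ```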

    Read the article

  • Guide on crawling the entire web?

    - by bohohasdhfasdf
    I just had this thought, and was wondering if it's possible to crawl the entire web (just like the big boys!) on a single dedicated server (like Core2Duo, 8 GB RAM, 750 GB disk, 100 Mbps). I've come across a paper where this was done, but I cannot recall the paper's title. It was about crawling the entire web on a single dedicated server using some statistical model. Anyways, imagine starting with just around 10,000 seed URLs and doing an exhaustive crawl... is it possible? I need to crawl the web but am limited to a dedicated server. How can I do this? Is there an open source solution out there already? For example, see this real-time search engine: http://crawlrapidshare.com. The results are extremely good and freshly updated... how are they doing this?

    Read the article

  • How to store a Hierarchical K-Means tree for a large number of images, using OpenCV?

    - by AquaAsh
    I am trying to make a program that will find similar images from a dataset of images. The steps are: 1) extract SURF descriptors for all images, 2) store the descriptors, 3) apply KNN on the stored descriptors, 4) match the stored descriptors to the query image's descriptor using KNN. Now each image's SURF descriptors will be stored as a hierarchical k-means tree; do I store each tree as a separate file, or is it possible to build some sort of single tree with all the images' descriptors that is updated as images are added to the dataset? This is the paper I am basing the program on: www.ijest.info/docs/IJEST10-02-03-13.pdf.

    Read the article

  • How to disable Eclipse's behavior of maximizing editor when double clicking a tab?

    - by Steve Kehlet
    I recently updated my Eclipse (now running 20100218-1602), and I've found that whenever I click around quickly between tabs on the tab bar, it will sometimes maximize the editor and hide the PHP Explorer to the left. After researching a little, this appears to be a feature of double clicking a tab. So I guess it's my fault - I'm sure I'm clicking around too fast and mistakenly double clicking a tab - but it happens often enough in what I'd consider a normal editing session that I've come to absolutely loathe it, and even after the usual Googling due diligence I cannot figure out how to turn it off. In this post (http://stackoverflow.com/questions/594659/eclipses-tab-double-click-on-visual-studio) someone mentions the Window.AutoHideAll shortcut, however that seems to only be for assigning keyboard shortcuts - this is a mouse click thing. But maybe it's a clue. I can't find anything relevant under Eclipse - Preferences - PHP. I don't think it's specific to PHP, because if I switch to the Java perspective, double clicking a tab hides the Package Explorer. Any suggestions are appreciated, thanks!

    Read the article

  • Sync between local service with a thread and an activity

    - by Henrik
    Hello all, I'm trying to think of a way to sync between a local service and the main activity. The local service has: a thread with a socket connection that could receive data at any time, and a list/array with data. At any time the socket could receive data and add it to the list. The activity needs to display this data. So when the activity starts up, it needs to bind to or start the local service and fetch the list. It also needs to be notified if the list is updated. I think I would need to sync my list somehow so the local service does not add a new entry to it while the activity fetches the list when connecting to the service. Any ideas? Thanks.

    Read the article

  • Duplicate / Copy records in the same MySQL table

    - by Digits
    Hello, I have been looking for a while now but I cannot find an easy solution for my problem. I would like to duplicate a record in a table, but of course the unique primary key needs to be updated. I have this query: INSERT INTO invoices SELECT * FROM invoices AS iv WHERE iv.ID=XXXXX ON DUPLICATE KEY UPDATE ID = (SELECT MAX(ID)+1 FROM invoices) The problem is that this just changes the ID of the existing row instead of copying the row. Does anybody know how to fix this? Thank you very much, Digits //edit: I would like to do this without typing all the field names, because the field names can change over time.

    Read the article

  • Finding latest release links on website for C++ Application

    - by Brett Powell
    Basically I have written a game plugin that will allow server admins to update their administration tools from within the game rather than having to go download it and install it. The releases are updated regularly, and the beta versions are nightly builds. I am trying to find a way to grab the links from the website, but I cannot think of any way to do this off the top of my head. I was hoping someone here might be able to suggest something that would work. http://www.sourcemod.net/snapshots.php That's the website; basically I am trying to grab the links for the latest stable branch and the latest development branch.

    Read the article

  • Forcing a Postback in ASP.NET

    - by Nick LaMarca
    Please take a look at the following click event... Protected Sub btnDownloadEmpl_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnDownloadEmpl.Click Dim emplTable As DataTable = SiteAccess.DownloadEmployee_H() Dim d As String = Format(Date.Now, "d") Dim ad() As String = d.Split("/") Dim fd As String = ad(0) & ad(1) Dim fn As String = "E_" & fd & ".csv" Response.ContentType = "text/csv" Response.AddHeader("Content-Disposition", "attachment; filename=" & fn) CreateCSVFile(emplTable, Response.Output) Response.Flush() Response.End() lblEmpl.Visible = True End Sub This code simply exports data from a DataTable to a CSV file. The problem here is that lblEmpl.Visible = True never takes effect, because this code doesn't cause a postback to the server. Even if I put the line lblEmpl.Visible = True at the top of the click event, the line executes fine, but the page is never updated. How can I fix this?

    Read the article

  • Facebook meta tag description not updating

    - by wazzz
    Three days ago I updated the description within the Facebook meta tag, but the change is not reflected when sharing a link on Facebook. Instead, the old description still appears. According to Facebook, it scrapes your page every 24 hours to ensure the description (and other share data) is up to date. However, one can manually refresh it by entering the post URL into the Facebook URL Linter. I did manually refresh it and have now waited for 3 days. When I look at the debugging output from the linter, it shows the correct up-to-date description, but the old description is still shown when sharing a link. How to reproduce: this is our website: https://www.tradeinsports.se/#tis1 (It's in Swedish so bear with me please). If you go to the above link, click on either of the two available products, and then share on Facebook, you can see the difference between the description that appears and the one in the linter debugging output. Any help would be appreciated.

    Read the article

  • How can creating the SessionFactory become slow after updating Hibernate?

    - by DR
    In my Java SE application I used Hibernate 3.4 and creating the SessionFactory took about 5 seconds. Today I updated to Hibernate 3.5.1 and suddenly it takes over a minute. What can be the cause of such a dramatic effect? I tried different things for the better part of the day and I have no clue... Some data I collected: according to the profiler, the most time is spent in PersisterFactory.createClassPersister, and within that method ProxyFactory.createClass takes the most time. The log shows nothing unusual. Changing hibernate.bytecode.use_reflection_optimizer makes no difference.

    Read the article

  • Hardware for multipurpose home server

    - by Michael Dmitry Azarkevich
    Hi guys, I'm looking to set up a multipurpose home server and hoped you could help me with the hardware selection. First of all, the services it will provide: hosting a MySQL database (for training and testing purposes), FTP server, personal mail server, home media server. So with this in mind I've done some research and found some viable solutions: a standard PC with the appropriate software (either second hand or new), a non-solid-state mini-ITX system, or a solid-state, fanless mini-ITX system. I've also noted the pros and cons of each system: A standard second-hand PC with old hardware would be the cheapest option. It could also have lacking processing power, not enough RAM and generally faulty hardware. Also, huge power consumption, heat generation and noise levels. A standard new PC would have top-notch hardware and will stay that way for quite some time, so it's a good investment. But again, the main problem is power consumption, heat generation and noise levels. A non-solid-state mini-ITX system would have the advantages of lower power consumption, lower cost (as far as I can see) and long-lasting hardware. But it will generate noise and heat, which will be even worse because of the size. A solid-state, fanless mini-ITX system would have all the advantages of a non-solid-state mini-ITX but with minimal noise and heat. The main disadvantage is the read/write problems of flash memory. All in all I'm leaning towards a non-solid-state mini-ITX because of the read/write issues of flash memory. So, after this overview of what I do know, my questions are: Are all these services even providable from a single server? To my best understanding they are, but then again, I might be wrong. Is any of these solutions viable? If yes, which one is the best for my purposes? If not, what would you suggest? Also, on a more software-oriented note: OS-wise, I'm planning to run Linux. I'm currently thinking of four options I've been recommended: CentOS, Gentoo, DSL (Damn Small Linux) and LFS (Linux From Scratch). Any thoughts on this? Any other distro you would recommend? Regarding FTP services, I've heard good things about FileZilla. Does anyone have experience with it? Do you recommend it? Do you recommend something else? Regarding the mail service, I know nothing about this except that it exists. Any software you recommend for this task? Home media, same as the mail service. Any recommended software? Thank you very much.

    Read the article

  • Is there a centralized list of country names that can be used for web drop-down boxes (and validation)?

    - by Thr4wn
    There are examples online with web select boxes that have a huge list of countries and that probably will be good enough for me to use. However, by Murphy's law, there's bound to be some random country that someone is from and isn't on my list (and probably someone else also ran into this and has updated their local list). Also, when new countries are added, I won't know about it. Basically, I feel it's better practice and a better smell if there is some centralized list of country names that I can use / trust. (also it could set/follow standards for exact namings "United St..." vs "USA" etc.) I would prefer a solution that isn't IIS specific if possible

    Read the article

  • Pushing Large Files to 500+ Computers [closed]

    - by WMIF
    I work with a team to manage 500-600 rented Windows 7 computers for an annual conference. We have a large amount of data that needs to be synced to these computers, up to 1 TiB. The computers are divided into rooms and connected through unmanaged gigabit switches. We prepare these computers ahead of time with the Windows installation and configuration, plus any files that we have available to us before we send the base image in for replication by the rental company. Every year, we have presenters approach us on site with up to gigs of data that need to be pushed to the room that they will be presenting in. Sometimes they only have a few small files, such as a slide PDF, but sometimes the data is much larger, around 5 GiB. Our current strategy for pushing these files is using batch scripts and RoboCopy. For the large pushes, we actually use a BitTorrent client to generate a torrent file, and then we use the batch-RoboCopy to push the torrent into a folder on the remote machines that is being monitored by an installed BT client. Often, this data needs to be pushed immediately within a small time window. We have several machines in a control room that are identical to the machines on the floor that we use for these pushes. We occasionally have a need to execute a program on the remote machines, and we currently use batch and PsExec to handle this task. We would love to be able to respond to these last minute pushes with "sorry, your own fault", but it won't happen. The BT method has allowed us to have a much faster response time, but the whole batch process can get messy when there are multiple jobs being pushed. We use Enterprise Ghost for other processes, and it doesn't work well at this large a scale, plus it is really quite expensive for a once-a-year task like this. EDIT: There is a hard requirement that the remote machines on the floor are running Windows. The control machines do not have a hard OS requirement. I would really like to stay away from multicast because of complications with upstream routers. Is multicast or BitTorrent the better way to go on this? Is there another protocol that might work better?

    Read the article

  • Data Usage Checker Tools

    - by Lucifer
    Hey All, I am about to begin a project for a new client, and am worried about a few things concerning data usage on their internet plan. We're in an area where most of the major networks don't cover the area, and the ones that do have very expensive plans with very low data allowance per month. I need to develop an app, but part of the problem lies with checking database values every 30 seconds. It's pretty important that this check happens every 30 seconds, as the database is actually updated all day, every day, approx. every 5 seconds (apparently). Each row in the database consists of about a page full of text if you were to paste it into MS Word. So, are there any logical ways of minimizing data usage in my case, and also how am I able to see exactly how much data is used just to establish a connection to the database? Are there any tools for this kind of info? Thanks :)

    Read the article

  • How can I move app.config to a different folder inside the Solution Explorer?

    - by Coder7862396
    I'm using Visual Studio 2010. In my Solution Explorer I like to sort my Project items into folders (a folder for Forms, a folder for Classes, a Misc folder, etc.). It seems, though, that if I move the "app.config" file to a folder named "Config Files", everything works until I change a setting in the Settings.settings file. Once I do that, a new app.config is created and the one that was in the "Config Files" folder does not get updated. I have searched the entire solution for the text "app.config" and did not find any results. How can I move this file so that my Solution Explorer looks nice and clean?

    Read the article

  • Modifying object in AfterInsert / AfterUpdate

    - by Jean Barmash
    I have a domain object that holds the results of a calculation based on parameters that are properties of the same domain object. I'd like to make sure that any time the parameters get changed by the user, it recalculates and gets saved properly into the database. I am trying to do that with afterInsert (to make sure the calculation is correct in the first place) and afterUpdate. However, since my calculation is trying to modify the object itself, it's not working - it throws various Hibernate exceptions. I tried to put the afterUpdate code into a transaction, but that didn't help. I am afraid I am getting into circular dependency issues here. The exception I am getting right now is: org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [esc.scorecard.PropertyScorecard#27] Are the GORM events designed for simpler use cases? I am tempted to conclude that modifying the object you are in the middle of saving is not the way to go.

    Read the article

  • Git workflow idea to push an unfinished local branch to remote for backup purposes

    - by Zubin
    Say I'm currently working on a new feature which I've branched off of the 'dev' branch. I've been working for several days and it's not yet ready to be merged with 'dev' and pushed, although I have made several commits and have been pulling changes to dev and then merging dev into my feature branch to keep myself updated. Here's my question: is it a good idea to push my feature branch to a new branch (with the same name as my local branch) on origin (say GitHub) just for backup purposes, and later on, when it's merged into 'dev' and/or 'master', delete it from origin?

    Read the article

  • Does this simple cache class need thread synchronization?

    - by DayOne
    Does this simple cache class need thread synchronization? If I remove the lock (_syncLock) statements, will I encounter any problems? I think I can remove the locks, as the reference should be updated correctly, right? But I'm thinking: what happens if client code is iterating over the dictionary returned by GetMyDataStructure and it gets replaced? Thanks! public sealed class Cache { private readonly object _syncLock = new object(); private IDictionary<int, MyDataStructure> _cache; public Cache() { Refresh(); } public void Refresh() { lock (_syncLock) { _cache = DAL.GetMyDataStructure(); } } public IDictionary<int, MyDataStructure> GetMyDataStructure() { lock (_syncLock) { return _cache; } } }
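
    For comparison, a lock-free sketch of the same class is below. It assumes Refresh() always publishes a brand-new dictionary from DAL.GetMyDataStructure() and never mutates one in place, and it relies on .NET's Volatile class (available from .NET 4.5) so readers always see a fully constructed dictionary. Callers iterating an older snapshot are unaffected when the reference is swapped, because their snapshot is never modified.

    ```csharp
    using System.Collections.Generic;
    using System.Threading;

    public sealed class Cache
    {
        private IDictionary<int, MyDataStructure> _cache;

        public Cache()
        {
            Refresh();
        }

        public void Refresh()
        {
            // Build the new dictionary completely before publishing it.
            var fresh = DAL.GetMyDataStructure();
            Volatile.Write(ref _cache, fresh);
        }

        public IDictionary<int, MyDataStructure> GetMyDataStructure()
        {
            // Readers get whichever snapshot is current; it is never mutated afterwards.
            return Volatile.Read(ref _cache);
        }
    }
    ```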

    Read the article

  • How do I suppress this output?

    - by David
    I have a code chunk in an R Markdown file. ```{r} library(UsingR) ``` Using knitHTML to compile causes the following output, which never happened before I updated to the latest versions of R and RStudio: ## Loading required package: MASS ## Loading required package: HistData ## Loading required package: Hmisc ## Loading required package: grid ## Loading required package: lattice ## Loading required package: survival ## Loading required package: splines ## Loading required package: Formula ## ## Attaching package: 'Hmisc' ## ## The following objects are masked from 'package:base': ## ## format.pval, round.POSIXt, trunc.POSIXt, units ## ## Loading required package: aplpack ## Loading required package: tcltk ## Loading required package: quantreg ## Loading required package: SparseM ## ## Attaching package: 'SparseM' ## ## The following object is masked from 'package:base': ## ## backsolve ## ## ## Attaching package: 'quantreg' ## ## The following object is masked from 'package:Hmisc': ## ## latex ## ## The following object is masked from 'package:survival': ## ## untangle.specials ## ## ## Attaching package: 'UsingR' ## ## The following object is masked from 'package:survival': ## ## cancer How can I suppress this output? Note: echo=FALSE did not work.

    Read the article
