Search Results

Search found 36111 results on 1445 pages for 'mysql update'.


  • Silverstripe on sourceforge project web

    - by theundecided
    How do I install SilverStripe on SourceForge for a project? I know I need a symlink, but I don't know how to create one. I have an htdocs folder that is read-only (once on the server); I can access it via SFTP and it is accessible via URL. I have a persistent folder that is writable (once on the server); I can access it via SFTP but it is not accessible via URL. I have MySQL credentials that are accepted during the install, but the install can't be finished because of the lack of write access.

    Read the article

  • Entity Framework and WCF

    - by Nihilist
    Hi, I am a little confused about designing WCF services with EF. When using WCF and EF, where do we draw the line on which properties to return with an entity and which not to? Here is my scenario. I have a User. Here are the relations: User [1 to many] Address, User [1 to many] Email, User [1 to many] Phone. Now on the web form, on page1 I can edit user information: I can edit a few properties on the User entity and can also edit the Address, Phone and Email entities [add / delete / update any of them]. On page2, I can only update User properties and nothing related to the navigation properties [Address, Email, Phone]. So when I return the User entity [or DTO], should I be returning the navigation properties too? Or should the client make multiple calls to get the navigation properties? Also, how does it go with Save? Should the client make multiple calls to save the user and the related entities, or just one call to save the graph? Let's say I just have a Save(User user) [where user has all the related entities too]; both page1 and page2 will call Save and pass me the user, but page1 needs a lot more information while page2 just needs the user's primitive properties. So my question is, where do we draw this line, and how do we design these services? Is a WCF operation designed around the page and the fields it has? I hope I explained my problem well enough.
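
    A minimal sketch of one common way to split the contract: lean operations for screens that only touch primitive fields, richer ones that carry the graph. All names here (UserDto, UserDetailsDto, IUserService) are illustrative, not from the question:

        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class UserDto
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Name { get; set; }
        }

        [DataContract]
        public class AddressDto
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Line1 { get; set; }
        }

        [DataContract]
        public class UserDetailsDto : UserDto
        {
            // Navigation data only travels on the "details" operations.
            [DataMember] public List<AddressDto> Addresses { get; set; }
        }

        [ServiceContract]
        public interface IUserService
        {
            // Lean operations for pages that only touch primitive user fields (page2).
            [OperationContract] UserDto GetUser(int userId);
            [OperationContract] void SaveUser(UserDto user);

            // Richer operations that carry the related data as one graph (page1).
            [OperationContract] UserDetailsDto GetUserDetails(int userId);
            [OperationContract] void SaveUserDetails(UserDetailsDto user);
        }

    The trade-off is chattiness versus payload size: one "details" call and one "details" save keep the graph consistent in a single round trip, while the lean pair avoids dragging addresses, emails and phones over the wire for pages that never show them.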

    Read the article

  • Atomic operations on several transactionless external systems

    - by simendsjo
    Say you have an application connecting to 3 different external systems. You need to update something in all 3. In case of a failure, you need to roll back the operations. This is not a hard thing to implement, but say operation 3 fails, and when rolling back, the rollback for operation 1 fails! Now the first external system is in an invalid state... I'm thinking a possible solution is to shut down the application and force a manual fix of the external system, but then again... it might already have used this information (and perhaps that's why it failed), or we might not have sufficient access. Or it might not even be a good way to roll back the action! Are there some good ways of handling such cases? EDIT: Some application details... It's a multi-user web application. Most of the work is done with scheduled jobs (through Quartz.Net), so most operations run in their own threads. Some user actions should trigger jobs that update several systems, though. The external systems are somewhat unstable. I was thinking of changing the application to use the Command and Unit of Work patterns.
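
    A minimal sketch of the compensating-action idea hinted at above, with hypothetical class names; it does not solve the case where a rollback itself fails, it only records which compensations failed so they can be retried or handed to an operator:

        using System;
        using System.Collections.Generic;

        public interface IExternalStep
        {
            void Execute();
            void Compensate();   // best-effort undo against the external system
        }

        public class MultiSystemOperation
        {
            private readonly List<IExternalStep> steps = new List<IExternalStep>();

            public void Add(IExternalStep step) { steps.Add(step); }

            public void Run()
            {
                var done = new Stack<IExternalStep>();
                try
                {
                    foreach (var step in steps) { step.Execute(); done.Push(step); }
                }
                catch (Exception)
                {
                    var failedCompensations = new List<IExternalStep>();
                    while (done.Count > 0)
                    {
                        var step = done.Pop();
                        try { step.Compensate(); }
                        catch (Exception) { failedCompensations.Add(step); }
                    }
                    // failedCompensations needs out-of-band handling: persist it,
                    // alert an operator, or schedule a retry job.
                    throw;
                }
            }
        }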

    Read the article

  • How to "push" updates to individual cells in a (ASP.NET) web page table/grid?

    - by dommer
    I'm building something similar to a price comparison site. This will be developed in ASP.NET/WebForms/C#/.NET 3.5. The site will be used by the public, so I have no control over the client side - and the application isn't so central to their lives that they'll go out of their way to make it work. I want to have a table/grid that displays products in rows, vendors in columns, and prices in the cells. Price updates will be arriving (at the server) continuously, and I'd like to "push" any updates to the clients' browsers - ideally only updating what has changed. So, if Vendor A changes their price on Product B, I'd want to immediately update the relevant cell in all the browsers that are viewing this information. I don't want to use any browser plug-ins (e.g. Silverlight). Javascript is fine. What's the best approach to take? Presumably my options are: 1) have the client page continuously poll the server for updates, locate the correct cell and update it; or 2) have the server be able to send updates to all the open browser pages which are listening for these updates. The first one would seem the most plausible, but I don't want to constrain the assembled wisdom of the SO community. I'm happy to purchase any third party components (e.g. a grid) that might help with this. I already have the DevExpress grid/ajax components if they provide anything useful.
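
    A rough sketch of the polling option (1), assuming a hypothetical in-memory PriceStore fed by the server-side price feed and an ASHX handler that client-side JavaScript polls every few seconds with the timestamp of its last successful poll; none of these names come from the question:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Script.Serialization;

        public class PriceChange
        {
            public int ProductId { get; set; }
            public int VendorId { get; set; }
            public decimal Price { get; set; }
            public long ChangedAtTicks { get; set; }
        }

        // Assumed application-wide store, filled by whatever receives vendor updates.
        public static class PriceStore
        {
            private static readonly List<PriceChange> changes = new List<PriceChange>();
            private static readonly object sync = new object();

            public static void Record(PriceChange change)
            {
                lock (sync) { changes.Add(change); }
            }

            public static List<PriceChange> GetChangesSince(long ticks)
            {
                lock (sync) { return changes.Where(c => c.ChangedAtTicks > ticks).ToList(); }
            }
        }

        // GET /PriceUpdates.ashx?since=<ticks value from the previous poll>
        public class PriceUpdatesHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                long since = long.Parse(context.Request["since"] ?? "0");
                var recent = PriceStore.GetChangesSince(since);

                context.Response.ContentType = "application/json";
                context.Response.Write(new JavaScriptSerializer().Serialize(recent));
            }
        }

    The page-side script would locate each changed cell (e.g. by a product/vendor id convention on the cell) and rewrite only that cell; the DevExpress grid's callback support may offer a similar partial refresh, but a plain handler keeps the polled payload small.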

    Read the article

  • entity framework insert bug

    - by tmfkmoney
    I found a previous question which seemed related, but there's no resolution and it's 5 months old, so I've opened my own version: http://stackoverflow.com/questions/1545583/entity-framework-inserting-new-entity-via-objectcontext-does-not-use-existing-e

    When I insert records into my database with the following, it works fine for a while and then eventually it starts inserting null values in the referenced field. This typically happens after I do an update on my model from the database, although not always. I'm using a MySQL database for this. I have debugged the code and the values are being set properly before the save event; they're just not getting inserted properly. I can always fix this issue by re-creating the model without touching any of my code, but I have to recreate the entire model - I can't just dump the relevant tables and re-add them. This makes me think it doesn't have anything to do with my code but something with the Entity Framework. Does anyone else have this problem and/or solved it?

        using (var db = new MyModel())
        {
            var stocks = from record in query
                         let ticker = record.Ticker
                         select new
                         {
                             company = db.Companies.FirstOrDefault(c => c.ticker == ticker),
                             price = Convert.ToDecimal(record.Price),
                             date_stamp = Convert.ToDateTime(record.DateTime)
                         };

            foreach (var stock in stocks)
            {
                if (stock.company != null)
                {
                    var price = new StockPrice
                    {
                        Company = stock.company,
                        price = stock.price,
                        date_stamp = stock.date_stamp
                    };
                    db.AddToStockPrices(price);
                }
            }
            db.SaveChanges();
        }
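
    A workaround sometimes suggested for this class of problem, sketched under the assumption of default EF 1.0 code generation (so StockPrice exposes a CompanyReference property) and with assumed container, entity-set and key names: set the relationship by EntityKey instead of by attached object, so the foreign key value is written even if the context loses track of the reference.

        // Variant of the loop body above; "MyModelEntities", "Companies" and "id"
        // are assumptions that would need to match the actual model.
        var price = new StockPrice
        {
            price = stock.price,
            date_stamp = stock.date_stamp
        };
        price.CompanyReference.EntityKey = new System.Data.EntityKey(
            "MyModelEntities.Companies", "id", stock.company.id);
        db.AddToStockPrices(price);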

    Read the article

  • How do the major C# DI/IoC frameworks compare?

    - by Slomojo
    At the risk of stepping into holy war territory: what are the strengths and weaknesses of these popular DI/IoC frameworks, and could one easily be considered the best? Ninject, Unity, Castle.Windsor, Autofac, StructureMap. Are there any other DI/IoC frameworks for C# that I haven't listed here? In the context of my use case, I'm building a client WPF app and a WCF/SQL services infrastructure; ease of use (especially in terms of clear and concise syntax), consistent documentation, good community support and performance are all important factors in my choice. Update: The resources and duplicate questions cited appear to be out of date; can someone with knowledge of all these frameworks come forward and provide some real insight? I realise that most opinion on this subject is likely to be biased, but I am hoping that someone has taken the time to study all these frameworks and can offer at least a generally objective comparison. I am quite willing to make my own investigations if this hasn't been done before, but I assumed this was something at least a few people had done already. Second update: If you do have experience with more than one DI/IoC container, please rank and summarise the pros and cons of those, thank you. This isn't an exercise in discovering all the obscure little containers that people have made; I'm looking for comparisons between the popular (and active) frameworks.
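
    For a feel of the "clear and concise syntax" criterion, here are roughly equivalent register-and-resolve snippets for four of the containers listed above; the exact method names shifted between releases of each library, so treat these as approximate rather than authoritative:

        using Ninject;                          // Ninject 2.x
        using Microsoft.Practices.Unity;        // Unity
        using Autofac;                          // Autofac 2.x
        using Castle.Windsor;                   // Castle Windsor
        using Castle.MicroKernel.Registration;

        public interface IMessageService { void Send(string text); }
        public class SmtpMessageService : IMessageService { public void Send(string text) { } }

        public static class ContainerSyntaxDemo
        {
            public static void Main()
            {
                // Ninject
                var kernel = new StandardKernel();
                kernel.Bind<IMessageService>().To<SmtpMessageService>();
                var fromNinject = kernel.Get<IMessageService>();

                // Unity
                var unity = new UnityContainer();
                unity.RegisterType<IMessageService, SmtpMessageService>();
                var fromUnity = unity.Resolve<IMessageService>();

                // Autofac
                var builder = new ContainerBuilder();
                builder.RegisterType<SmtpMessageService>().As<IMessageService>();
                var fromAutofac = builder.Build().Resolve<IMessageService>();

                // Castle Windsor
                var windsor = new WindsorContainer();
                windsor.Register(Component.For<IMessageService>().ImplementedBy<SmtpMessageService>());
                var fromWindsor = windsor.Resolve<IMessageService>();
            }
        }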

    Read the article

  • Another IE jQuery AJAX/post problem..

    - by Azzyh
    OK, so I have this website, http://www.leinstein.dk. You will see "Hello World!" - this comes from ok.php, and I've made a script that refreshes ok.php every 10 seconds. Anyway, this does not show in IE. I don't know why, and I hope you can help me out. Here's my script:

        function ajax_update() {
            cache: false
            /* var wrapperId = '#wtf'; */
            var postFile = 'ok.php';
            $.post("ok.php", function(data){
                cache: false
                $("#wtf").html(data);
            });
            setTimeout('ajax_update()', 10000);
        }

    And here's index.php:

        <? header("cache-control: no-cache"); ?>
        <html>
        <head>
            <link href="style.css" type="text/css" rel="stylesheet" />
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
            <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js"></script>
            <script src="ajax_framework.js" language="javascript" charset="UTF-8"></script>
        </head>
        <!-- AJAX UPDATE COMMENTS BEGIN -->
        <body onload="ajax_update();">
        <!-- AJAX UPDATE END -->
            <br>
            <div id="wtf"></div>
        </body>
        </html>

    Thanks in advance..!

    Read the article

  • Loading Dimension Tables - Methodologies

    - by Nev_Rahd
    Hello, recently I have been working on a project where I need to populate Dim tables from EDW tables. The EDW tables are Type II, meaning they maintain historical data. When it comes to loading a Dim table, the source may be multiple EDW tables, or a single table with multi-level pivoting (on attributes). Meaning: there would be 10 records - one for each attribute - which need to be pivoted on domain_code to make a single row in the Dim. Out of these 10 records there would be some attributes with the same domain_code but with different sub_domain_code, which need further pivoting on the subdomain code. Example: if I have domain codes 01, 02, 03, these are a straight pivot on domain code. I would also have domain code 10 with subdomain code / version 2006, 2007, 2008, 2009. That means I need to split my source table with the above attributes into two: one for domain code and the other for domain_code + version. So far so good. When it comes to loading the Dim table: as per the design specs for the dimensions (originally written by a third party), what they want is that for every single change in an EDW attribute, the load should assemble all the related records (for that NK) - meaning the new one plus the other attribute values which are current - process them to create a new dim record, and insert it. That means if a single extract contains 100 updated records (one for each NK), it should assemble 100 + (100*9) records to insert / update the dim table. How good is this approach? The other way I tried is to just do a lookup into the dim table for that NK, get the values of the most recent record (the attributes which have not changed), insert the new record and update the current one. What would be the better approach: assembling records on the source side for one attribute change, or looking up the dim table's most recent record and processing it? If this doesn't make sense, I would be happy to elaborate further. Thanks

    Read the article

  • import csv file/excel into sql database asp.net

    - by kiev
    Hi everyone! I am starting a project with ASP.NET / Visual Studio 2008 / SQL 2000 (2005 in future) using C#. The tricky part for me is that the existing DB schema changes often, and the import files' columns will all have to be matched up with the existing DB schema, since they may not be a one-to-one match on column names. (There is a lookup table that provides the table schema with the column names I will use.) I am exploring different ways to approach this and need some expert advice. Are there any existing controls or frameworks that I can leverage to do any of this? So far I have explored the FileUpload .NET control, as well as some 3rd-party upload controls such as SlickUpload, to handle the upload; the uploaded files should be < 500MB. The next part is reading my CSV / Excel file and parsing it for display to the user so they can match it with our DB schema. I saw CsvReader and others, but Excel is more difficult since I will need to support different versions. Essentially, the user performing this import will insert and/or update several tables from this import file. There are other more advanced requirements like record matching and a preview of the import records, but I wish to understand how to do this first. Update: I ended up using CsvReader from LumenWorks.Framework for uploading the CSV files.
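
    Since the update above settles on LumenWorks, here is a minimal sketch of reading a CSV with its CsvReader and pushing the rows into SQL Server through SqlBulkCopy; the file path, staging table name and the 1:1 column mappings are placeholders for whatever the schema lookup table dictates:

        using System.Data.SqlClient;
        using System.IO;
        using LumenWorks.Framework.IO.Csv;

        public static class CsvImport
        {
            public static void Load(string csvPath, string connectionString)
            {
                using (var csv = new CsvReader(new StreamReader(csvPath), true)) // true = file has headers
                using (var bulk = new SqlBulkCopy(connectionString))
                {
                    bulk.DestinationTableName = "dbo.ImportStaging";   // placeholder staging table

                    foreach (string header in csv.GetFieldHeaders())
                    {
                        // Map each CSV column to a destination column; 1:1 here,
                        // in practice resolved via the schema lookup table.
                        bulk.ColumnMappings.Add(header, header);
                    }

                    bulk.WriteToServer(csv); // CsvReader implements IDataReader
                }
            }
        }

    Because CsvReader implements IDataReader, WriteToServer streams the file row by row instead of loading it all into memory, which matters for files approaching 500MB.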

    Read the article

  • innerJoin query shows an error

    - by Chithri Ajay
    I just print the data from two tables, so I am using an inner join:

        SELECT sd.GameName
        FROM LottoryTickets AS sd
        JOIN group AS p ON sd.Group = p.groupname
        WHERE p.groupname = 11

    Now I get:

        #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'group AS p ON sd.Group = p.groupname WHERE p.groupname = 11 LIMIT 0, 30' at line 3

    Please guide me on this. Thanks in advance.
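
    GROUP is a reserved word in MySQL, so a table (or column) named group has to be quoted with backticks. A small sketch of the corrected query, shown here being run through Connector/NET on the assumption that the caller is a .NET app; the connection string is a placeholder and GameName is assumed to be a string column:

        using System;
        using MySql.Data.MySqlClient;

        class Demo
        {
            static void Main()
            {
                const string sql =
                    "SELECT sd.GameName " +
                    "FROM LottoryTickets AS sd " +
                    "JOIN `group` AS p ON sd.`Group` = p.groupname " +  // backticks: GROUP is reserved
                    "WHERE p.groupname = 11";

                using (var conn = new MySqlConnection("server=localhost;database=test;uid=...;pwd=..."))
                using (var cmd = new MySqlCommand(sql, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine(reader.GetString(0));
                    }
                }
            }
        }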

    Read the article

  • Validating a value for a DataColumn

    - by Richard Neil Ilagan
    Hello! I'm using a DataGrid with edit functionality in my project. It's handy compared to having to edit its source data manually, but sadly that means I'll have to deal with validating user input a bit more. And my problem is basically just that. When I set my DataGrid to edit mode, modify the values and then update, what is the best way to check whether a value I've entered is, in fact, compatible with the corresponding column's data type? i.e. (simple example):

        // assuming
        DataTable dt = new DataTable();
        dt.Columns.Add("aa", typeof(System.Int32));

        DataGrid dg = new DataGrid();
        dg.DataSource = dt;
        dg.DataBind();
        dg.UpdateCommand += dg_Update;

        // this is the update handler
        protected void dg_Update(object src, DataGridCommandEventArgs e)
        {
            string newValue = (someValueIEnteredInTextBox);
            // HOW DO I CHECK IF [newValue] IS COMPATIBLE WITH COLUMN "aa" ABOVE?
            dt.LoadDataRow(newValue, true);
        }

    Thanks guys. Any leads would be so much help.
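
    A minimal sketch of one way to test a string against a column's DataType before loading it: Convert.ChangeType throws if the text cannot be converted, so the check is just a guarded conversion (ColumnValidator is an illustrative name, not an existing class):

        using System;
        using System.Data;
        using System.Globalization;

        public static class ColumnValidator
        {
            // Returns true and the converted value if 'input' can be stored in 'column'.
            public static bool TryConvert(DataColumn column, string input, out object value)
            {
                value = DBNull.Value;
                if (string.IsNullOrEmpty(input))
                    return column.AllowDBNull;   // empty input is only valid for nullable columns
                try
                {
                    value = Convert.ChangeType(input, column.DataType, CultureInfo.CurrentCulture);
                    return true;
                }
                catch (FormatException)      { return false; }
                catch (InvalidCastException) { return false; }
                catch (OverflowException)    { return false; }
            }
        }

    In the dg_Update handler above you would call ColumnValidator.TryConvert(dt.Columns["aa"], newValue, out converted) and only call LoadDataRow when it returns true, otherwise surface a validation message next to the grid.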

    Read the article

  • explain these select statements!

    - by user329820
    Hi, I cannot get the difference between these statements. Would you please help me? I have read some samples of SELECT statements, but I did not get these ones:

        SELECT 'B' FROM T WHERE A = (SELECT NULL);
        SELECT 'C' FROM T WHERE A = ANY (SELECT NULL);
        SELECT 'D' FROM T WHERE A = A;

    I use MySQL.

    Read the article

  • how to validate the dropdownlist box values on submit

    - by kumar
    Hello friends, I have a validate_excpt function that runs on the form before submit. On the view I have two dropdown list boxes. On submit I need to check: if I select a ResolutionCode, I need to validate the ReasonCode dropdown list - it should have a selection, and if it doesn't I should display "Please select a ReasonCode". Do I need to do this on the submit button click, or can I do it in validate_excpt? Can anybody help me out?

        <label for="ResolutionCode">
            Resolution:
            <span>
                <%=Html.DropDownListFor(model => model.ResolutionCode,
                    new SelectList(Model.LookupCodes["C_EXCPT_RESL"], "Key", "Value"))%>
            </span>
        </label>
        <label for="ReasonCode">
            Reason:
            <span><%=Html.DropDownListFor(model => model.ReasonCode,
                new SelectList(Model.LookupCodes["C_EXCPT_RSN"], "Key", "Value"))%></span>
        </label>

        function validate_excpt(formData, jqForm, options) {
            var form = jqForm[0];
            return true;
        }

        // post-submit callback
        function showResponse(responseText, statusText, xhr, $form) {
            if (responseText[0].substring(0, 16) != "System.Exception") {
                $('#error-msg-ID span:last').html('<strong>Update successful.</strong>');
            } else {
                $('#error-msg-ID span:last').html('<strong>Update failed.</strong> ' + responseText[0].substring(0, 48));
            }
            $('#error-msg-ID').removeClass('hide');
        }

        $('#exc-').ajaxForm({
            target: '#error-msg-ID',
            beforeSubmit: validate_excpt,
            success: showResponse,
            dataType: 'json'
        });

    Read the article

  • Is it possible in Perl to require that a subroutine call is made?

    - by MitchelWB
    I don't know enough about Perl to even know exactly what I'm asking for, but I'm writing a series of subroutines to be available to many individual scripts that all process different incoming flat files. The process is far from perfect, but it's what I've got to deal with, and I'm trying to build myself a small library of subs that makes it easier for me to manage it all. Each script handles a different incoming flat file with its own formatting, sorting, grouping and output requirements. One common aspect is that we have small text files that house counters, which are used to name the output files so that we have no duplicate file names. Because the processing of the data differs for each file, I need to open the counter file to get my counter value; since this is a common operation, I'd like to put it in a sub that retrieves the counter. I then need to write file-specific code to process the data, and I would like a second sub that lets me write the updated counter back once I've processed the data. Is there a way to make the second sub call a requirement if the first one is called? Ideally it could even be an error that would prevent the script from running at all, much like a syntax error. EDIT: Here is a little [ugly and simplified] pseudo-code to give a better feel for what the current process is:

        require "importLibrary.plx";

        # open data source file
        open DataIn, $filename;

        # call getCounterInfo from importLibrary.plx to get the counter value from the counter file
        $counter = &getCounterInfo($counterFileName);

        while (<DataIn>) {
            # Process data based on unique formatting and requirements
            # output to task files based on requirements and name files using $counter
            # increment $counter
        }

        # update counter file with new value of $counter
        &updateCounterInfo($counter);

    Read the article

  • do web applications tend to have an admin panel?

    - by ajsie
    I wonder whether large web applications like Twitter and Facebook have admin panels to handle CRUD for users, posts, images, themes and so on, just like a CMS such as Drupal does. Do the programmers have to code the front end for regular users AND a back end for the administrators? If I develop a web application, is it recommended that I also code the admin part? Or is it unnecessary, since I could handle everything directly in MySQL and by editing the PHP scripts directly? Share your thoughts! Thanks

    Read the article

  • large databases in sqlite - file size considerations?

    - by Gj
    I'm using a SQLite db, which is very convenient and seems to meet all of my needs at this point. Currently my db size is <50MB, but I now need to add a new table which will store large text blobs, which will cause the db to reach up to 5GB within the next year. Would SQLite be able to deal with a 5GB db size? Any caveats to that, compared with, say, MySQL?

    Read the article

  • How can I limit asp.net control actions based on user role?

    - by Duke
    I have several pages or views in my application which are essentially the same for both authenticated users and anonymous users. I'd like to limit the insert/update/delete actions in formviews and gridviews to authenticated users only, and allow read access for both authed and anon users. I'm using the asp.net configuration system for handling authentication and roles. This system limits access based on path so I've been creating duplicate pages for authed and anon paths. The solution that comes to mind immediately is to check roles in the appropriate event handlers, limiting what possible actions are displayed (insert/update/delete buttons) and also limiting what actions are performed (for users that may know how to perform an action in the absence of a button.) However, this solution doesn't eliminate duplication - I'd be duplicating security code on a series of pages rather than duplicating pages and limiting access based on path; the latter would be significantly less complicated. I could always build some controls that offered role-based configuration, but I don't think I have time for that kind of commitment right now. Is there a relatively easy way to do this (do such controls exist?) or should I just stick to path-based access and duplicate pages? Does it even make sense to use two methods of authorization? There are still some pages which are strictly for either role so I'll be making use of path-based authorization anyway. Finally, would using something other than path-based authorization be contrary to typical asp.net design practices, at least in the context of using the asp.net configuration system?
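
    A rough sketch of the in-page check described above, assuming a GridView whose first column is a CommandField and a FormView used for inserts; the control names and role are placeholders, and the markup (.aspx) side is omitted:

        using System;
        using System.Web.UI.WebControls;

        public partial class ProductsPage : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                bool canEdit = User.Identity.IsAuthenticated;   // or: User.IsInRole("Editor")

                // Hide the edit/delete buttons for anonymous users.
                var commands = (CommandField)ProductsGridView.Columns[0];   // assumed column position
                commands.ShowEditButton = canEdit;
                commands.ShowDeleteButton = canEdit;

                // Keep the FormView read-only for anonymous users.
                if (!canEdit && !IsPostBack)
                    ProductsFormView.ChangeMode(FormViewMode.ReadOnly);
            }

            // Server-side guard for users who post a command without seeing a button.
            protected void ProductsGridView_RowUpdating(object sender, GridViewUpdateEventArgs e)
            {
                if (!User.Identity.IsAuthenticated)
                    e.Cancel = true;
            }
        }

    The server-side guard in RowUpdating is the part that matters for users who post a command without a button; hiding the buttons alone is only cosmetic, and the check itself could live in a base page or user control to avoid repeating it across pages.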

    Read the article

  • Get the equivalent time between "dynamic" time zones

    - by doctore
    I have a table providers that has three columns (it contains more columns, but they are not important in this case): starttime, the time from which you can contact the provider; endtime, the final hour at which you can contact the provider; and region_id, the region where the provider resides (in the USA: California, Texas, etc.; in the UK: England, Scotland, etc.). starttime and endtime are time-without-time-zone columns, but, indirectly, their values are in the time zone of the region in which the provider resides. For example:

        starttime | endtime  | region_id (time zone of region) | "real" st | "real" et
        ----------|----------|---------------------------------|-----------|-----------
        03:00:00  | 17:00:00 | 1 (EGT => -1)                   | 02:00:00  | 16:00:00

    Often I need to get the list of providers whose time range contains the current server time (taking the time zone conversion into account). The problem is that the time zones aren't "constant", i.e. they may change during the summer; however, this change is very specific to the region and is not always carried out at the same time: EGT <=> EGST, ART <=> ARST, etc. The questions are: 1. Is it necessary to use a web service to update the time zones of the regions every so often? Does anyone know of a web service that could serve? 2. Is there a better approach to solve this problem? Thanks in advance. UPDATE: I will give an example to clarify what I'm trying to get. In the table providers I have these records:

        idproviders | starttime | endtime  | region_id
        ------------|-----------|----------|------------
        1           | 03:00:00  | 17:00:00 | 23 (Texas)
        2           | 04:00:00  | 18:00:00 | 23 (Texas)

    If I execute the query in January, with this information: server time (UTC offset) = 0 hours; Texas providers (UTC offset) = +1 hour; server time = 02:00:00 - I should get the following result: idproviders = 1. If I execute the query in June, with this information: server time (UTC offset) = 0 hours; Texas providers (UTC offset) = +2 hours (their local time has not changed, but their time zone has); server time = 02:00:00 - I should get the following results: idproviders = 1 and 2.
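
    The question doesn't name a platform, but the usual answer is to store a time zone identifier per region and let a tz-aware library decide the offset for a given date, so DST needs no manual updates. A sketch of that idea in C# with TimeZoneInfo, purely as an illustration - the zone id and times are made up:

        using System;

        class AvailabilityDemo
        {
            static void Main()
            {
                // Store a zone id per region instead of a numeric offset.
                var texas = TimeZoneInfo.FindSystemTimeZoneById("Central Standard Time");

                // Provider's local availability window, e.g. 03:00-17:00 local time today.
                var today = DateTime.UtcNow.Date;
                var localStart = new DateTime(today.Year, today.Month, today.Day, 3, 0, 0, DateTimeKind.Unspecified);
                var localEnd   = new DateTime(today.Year, today.Month, today.Day, 17, 0, 0, DateTimeKind.Unspecified);

                // ConvertTimeToUtc applies whichever offset (standard or daylight) is
                // in force on that date, so January and June come out differently.
                DateTime utcStart = TimeZoneInfo.ConvertTimeToUtc(localStart, texas);
                DateTime utcEnd   = TimeZoneInfo.ConvertTimeToUtc(localEnd, texas);

                bool contactableNow = DateTime.UtcNow >= utcStart && DateTime.UtcNow <= utcEnd;
                Console.WriteLine("{0:o} - {1:o}: {2}", utcStart, utcEnd, contactableNow);
            }
        }

    A database-side equivalent is to keep the tz data loaded in the server (PostgreSQL's AT TIME ZONE with pg_timezone_names, or MySQL's CONVERT_TZ with the mysql time zone tables) and convert at query time instead of in application code.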

    Read the article

  • error while updating a database in ASP.NET

    - by Viredae
    I am having trouble updating an SQL database. The problem is not that it doesn't update at all, but that particular parameters are being updated while the others are not. Here is the code for updating the parameters:

        string EditRequest = "UPDATE Requests SET Description = @Desc, BJustif = @Justif, Priority = @Priority, Requested_System = @Requested, Request_Status = @Stat WHERE";
        EditRequest += " Req_ID=@ID";

        SqlConnection Submit_conn = new SqlConnection(WebConfigurationManager.ConnectionStrings["DBConn"].ConnectionString);
        SqlCommand Submit_comm = new SqlCommand(EditRequest, Submit_conn);
        Submit_comm.Parameters.AddWithValue("@ID", Request.QueryString["reqid"]);
        Submit_comm.Parameters.AddWithValue("@Desc", DescBox.Text);
        Submit_comm.Parameters.AddWithValue("@Justif", JustifBox.Text);
        Submit_comm.Parameters.AddWithValue("@Priority", PriorityList.SelectedValue);
        Submit_comm.Parameters.AddWithValue("@Requested", RelatedBox.Text);
        Submit_comm.Parameters.AddWithValue("@Stat", 1);
        Submit_conn.Open();
        Submit_comm.ExecuteNonQuery();
        Submit_comm.Dispose();
        Submit_comm = null;
        Submit_conn.Close();
        get_Description();
        Page.ClientScript.RegisterStartupScript(this.GetType(), "Refresh", "ReloadPage();", true);

    This code is called by a button on a pop-up form which shows the content of the parameters being changed in text boxes that are also used to submit the changes back to the database. But when I press submit, the parameters which are displayed on the form don't change. I can't find any problem with the code, even though I've compared it to similar code which is working fine. In case you need it, here is one of the text boxes I'm using to display and edit the content:

        <asp:TextBox ID="JustifBox" TextMode="MultiLine" runat="server" Width="250" Height="50"></asp:TextBox>

    What exactly is wrong with the code?
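
    One common cause of exactly this symptom, offered as a guess since the Page_Load isn't shown: if the page re-binds the text boxes from the database on every request, they are repopulated with the old values before the button's click handler reads .Text, so some parameters appear to stick. The usual guard is to bind only on the first request:

        protected void Page_Load(object sender, EventArgs e)
        {
            // Bind the edit controls from the database only on the initial GET;
            // on postback, keep the user's edited values for the click handler.
            if (!IsPostBack)
            {
                get_Description();   // the page's own load/bind routine
            }
        }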

    Read the article

  • Rails problem with Delayed_Job and Active Record

    - by Michael Waxman
    I'm using Delayed_Job to grab a Twitter user's data from the API, but it's not saving it in the model for some reason! Please help! (code below)

        class BandJob < Struct.new(:band_id, :band_username) # parameter
          def perform
            require 'json'
            require 'open-uri'
            band = Band.find_by_id(band_id)
            t = JSON.parse(open("http://twitter.com/users/show/#{band_username}.json").read)
            band.screen_name = t['screen_name']
            band.profile_background_image = t['profile_background_image_url']
            band.url = 'http://' + band_username + '.com'
            band.save!
          end
        end

    To clarify, I'm actually not getting any errors, it's just not saving. Here's what my log looks like:

        * [JOB] acquiring lock on BandJob
        Delayed::Job Update (3.1ms)  UPDATE "delayed_jobs" SET locked_at = '2009-11-09 18:59:45', locked_by = 'host:dhcp128036151228.central.yale.edu pid:2864' WHERE (id = 10442 and (locked_at is null or locked_at < '2009-11-09 14:59:45') and (run_at <= '2009-11-09 18:59:45'))
        Band Load (1.5ms)  SELECT * FROM "bands" WHERE ("bands"."id" = 34) LIMIT 1
        Band Update (0.6ms)  UPDATE "bands" SET "updated_at" = '2009-11-09 18:59:45', "profile_background_image" = 'http://a3.twimg.com/profile_background_images/38193417/fbtile4.jpg', "url" = 'http://Coldplay.com', "screen_name" = 'coldplay' WHERE "id" = 34
        Delayed::Job Destroy (0.5ms)  DELETE FROM "delayed_jobs" WHERE "id" = 10442
        * [JOB] BandJob completed after 0.5448
        1 jobs processed at 1.8011 j/s, 0 failed ...

    Thanks!

    Read the article
