Search Results

Search found 32492 results on 1300 pages for 'reporting database'.

Page 749/1300

  • Passing session between jsf backing bean and model

    - by Rachel
    Background: I have a backing bean with an upload method that listens for file uploads. I pass the uploaded file to a parser, and in the parser I validate the rows of the CSV file. If validation fails, I have to log the failure and save it to a logging table in the database. My end goal: get session information in the logging bean so that I can obtain the InitialContext and call an EJB to save the data to the database. What is happening: in my upload backing bean I have the session, but when I call the parser I deliberately do not pass the session along, because I do not want the parser to depend on it (I want to unit test the parser in isolation). So the parser has no session information; from the parser I call the logging bean (just a bean with some EJB methods), but that logging bean needs the session because it needs the InitialContext. Question: is there a way in JSF to get, in my logging bean, the session that I have in my upload backing bean? I tried:

        FacesContext ctx = FacesContext.getCurrentInstance();
        HttpSession session = (HttpSession) ctx.getExternalContext().getSession(false);

    but the session value was null. The more general question: how can I get session information in a model bean or other beans that are referenced from backing beans in which we have the session? Is there a generic way in JSF to access session information throughout the application?

    Read the article

  • Enforce link in Team foundation server bug work item for duplicates

    - by Tewr
    We have just started out with Team Foundation Server 2008 / Visual Studio Team System and we are pleased to find how easily we can export and modify work items to our needs. However, the last thing that would make the setup perfect for us has proved somewhat difficult. We have exported the Bug work item type and modified it to appear differently to different groups of users. We do, however, see a potential problem in non-developers reporting bugs that turn out to be duplicates. We would like to enforce that users who close a ticket with resolved reason "duplicate" also create a link to the bug that is perceived as the first bug report. I have looked at System.RelatedLinkCount and put in the rule

        <FIELD type="Integer" name="RelatedLinkCount" refname="System.RelatedLinkCount">
          <WHEN field="Microsoft.VSTS.Common.ResolvedReason" value="duplicate">
            <PROHIBITEDVALUES>
              <LISTITEM value="0" />
            </PROHIBITEDVALUES>
          </WHEN>
        </FIELD>

    However, when I try to put anything in that scope, the importer tells me that System.RelatedLinkCount does not accept the rule, no matter what I put there. The rule above shows what I am trying to do (the ideal rule would also check that the bug I link to is not itself a duplicate, though that is overkill). Has anyone else tried to enforce rules like this in work items? Is there another approach to solving the same issue? I am thankful for any thoughts on the matter.

    Read the article

  • My TreeView data is not changing

    - by Vibin Jith
    Hi, I am trying to display user permissions in a TreeView. The permissions for each user are stored in the database as an XML document. On the page, a combo box lists the users and a TreeView is bound to the permission XML. When a user is selected in the combo box, I take the XML from the database, connect it to an XmlDataSource and bind it to the TreeView. What is happening is this: the first time, the TreeView is filled with the XML nodes, but after that it does not work. The first selection is fine; subsequent selections have no effect on the TreeView. The code runs without errors when debugging. Can you tell me why the TreeView data source is not updating? I used this code:

        Dim permissionRoot = From permissionNode In MyUser.UserPermissionXml.Root.Elements("menuNode")
        XmlTreeViewSource.Data = permissionRoot(0).ToString
        trvPermission.DataSource = XmlTreeViewSource
        trvPermission.DataBind()
        SetPermission(trvPermission.Nodes(0))

    The markup:

        <asp:TreeView ID="trvPermission" runat="server" ExpandDepth="2" ShowCheckBoxes="All" ShowLines="True" ForeColor="#005782">
          <DataBindings>
            <asp:TreeNodeBinding DataMember="menuNode" TextField="title" ValueField="value" />
          </DataBindings>
        </asp:TreeView>

    Read the article

  • Use a foreign key mapping to get data from the other table using Python and SQLAlchemy.

    - by Az
    Hmm, the title was harder to formulate than I thought. Basically, I've got these simple classes mapped to tables using SQLAlchemy. I know they're missing a few items, but those aren't essential for highlighting the problem.

        class Customer(object):
            def __init__(self, uid, name, email):
                self.uid = uid
                self.name = name
                self.email = email

            def __repr__(self):
                return str(self)

            def __str__(self):
                return "Cust: %s, Name: %s (Email: %s)" % (self.uid, self.name, self.email)

    The above is basically a simple customer with an id, a name and an email address.

        class Order(object):
            def __init__(self, item_id, item_name, customer):
                self.item_id = item_id
                self.item_name = item_name
                self.customer = None

            def __repr__(self):
                return str(self)

            def __str__(self):
                return "Item ID %s: %s, has been ordered by customer no. %s" % (self.item_id, self.item_name, self.customer)

    This is the Order class that just holds the order information: an id, a name and a reference to a customer. The customer is initialised to None to indicate that this item doesn't have a customer yet; the code's job will be to assign the item a customer. The following code maps these classes to their respective database tables:

        # SQLAlchemy database transmutation
        engine = create_engine('sqlite:///:memory:', echo=False)
        metadata = MetaData()

        customers_table = Table('customers', metadata,
            Column('uid', Integer, primary_key=True),
            Column('name', String),
            Column('email', String)
        )

        orders_table = Table('orders', metadata,
            Column('item_id', Integer, primary_key=True),
            Column('item_name', String),
            Column('customer', Integer, ForeignKey('customers.uid'))
        )

        metadata.create_all(engine)

        mapper(Customer, customers_table)
        mapper(Order, orders_table)

    Now if I do something like:

        for order in session.query(Order):
            print order

    I can get a list of orders in this form:

        Item ID 1001: MX4000 Laser Mouse, has been ordered by customer no. 12

    What I want to do is find out customer 12's name and email address (which is why I used the ForeignKey into the Customer table). How would I go about it?

    Read the article

  • Looking for Simplified Overview of EJB3

    - by sdoca
    Hi, I'm looking for a simplified overview of EJB3 components. I seem to understand most of the pieces of the puzzle, but can't quite get them to fit together in my brain as a full picture. I've developed numerous web applications (wars) that have been deployed on Tomcat before, but not a full-fledged EE application (ear). I would like the overview to be as generic as possible. I'm not looking for a tutorial on how to set up EJB3 on GlassFish built in NetBeans, or some other vendor-specific tutorial that's more about the IDE than the technology. I keep reading about Java, ejb-jar, web and ear modules but am not clear on what these different modules contain and how to use them to put together my app. In my case, I want to write a simple database CRUD web application. The first step is simple: create entity classes that model the database tables my app will be using. I plan on using annotations. Should I create a jar that contains just these entity classes? Is this the ejb-jar module (sometimes referred to as the Java module)? Next, I'll need some business logic classes that make use of the entity classes. These are the session beans (stateless or stateful), correct? Should these be packaged in the same jar as the entity classes or in a separate jar? Finally, I'll need some sort of web interface (I'll be creating a JSF portlet) application that makes use of both the session and entity beans. Together with the above jar(s), this will be my war? Assuming the above to be correct, what is involved in creating an ear? Forgive me if this post is vague, but I'm having a hard time defining what it is I don't understand. Thanks for any help!

    Read the article

  • Scalability 101: How can I design a scalable web application using PHP?

    - by Legend
    I am building a web application and have a couple of quick questions. From what I've learnt, one should not worry about scalability when initially building the app and should only start worrying when the traffic increases. However, this being my first web application, I am not quite sure if I should take an approach where I design things in an ad-hoc manner and later "fix" them. I have been reading stories about how people start off with an app that gets millions of users in a week or two. Not that I will face the same situation, but I can't help but wonder: how do these people do it? Currently, I bought a shared hosting account on Lunarpages and that got me started in building and testing the application. However, I am interested in learning how to build the same application in a scalable manner using the cloud, for instance Amazon's EC2. From my understanding, I can see a couple of components:

    - A load balancer that first receives requests and then decides where to route each request.
    - The request is then handled by a server replica that processes it, updates the database if required, and sends the response back to the client.
    - If a similar request comes in, a caching mechanism like memcached kicks in and returns objects from the cache.
    - A black box that handles database replication.

    Specifically, I am trying to do the following:

    - Set up a load balancer (my homework revealed that HAProxy is one such load balancer).
    - Set up replication so that databases can be synchronized.
    - Use memcached.
    - Configure Apache to work with multiple web servers.
    - Partition the application to use Amazon EC2 and Amazon S3 (my application is something that will need a great deal of storage).

    Finally, how can I avoid burning myself when using Amazon services? Because this is just a learning phase, I can probably do with 2-3 servers and a simple load balancer with replication, but I want to avoid accidentally paying loads of money. I am able to find resources on individual topics but am unable to find something that starts off from the big picture. Can someone please help me get started?

    Read the article

  • [php] Firefox refreshes page 1, even after redirection to page 2

    - by Znarkus
    Hi! This is a quite weird and annoying problem, which is reproduced with the script below. Say we have two pages: script.php and script.php?second. Page 1 creates some database entries and redirects to page 2. On page 2, the user is presented with an editor for said entries. If page 1 for some reason crashes on the first try and prints some error message, a strange thing will happen. If we refresh page 1 (and this time it redirects fine), every consecutive refresh (of page 2) will actually refresh page 1 and again redirect to page 2. In the above example this would create new database entries on every refresh, which is the problem I want to circumvent by redirecting to page 2.

        <?php
        header('Content-type: text/plain');
        session_start();

        if (!isset($_GET['second'])) {
            $_SESSION['counter'] = isset($_SESSION['counter']) ? $_SESSION['counter'] + 1 : 1;

            /*$_SESSION['counter'] = 0;
            exit('asd');*/

            header("Location: {$_SERVER['PHP_SELF']}?second", true, 303);
            exit;
        }

        echo "Counter: {$_SESSION['counter']}";

    To try the complete script above, first run it with the commented code intact, then with the commented code enabled. I've tried 301, 302 and 303 redirections. Does anyone know why this is happening?

    Read the article

  • SQL Server 2008 Stored Proc suddenly returns -1

    - by aaginor
    I use the following stored procedure in my SQL Server 2008 database to return a value to my C# program:

        ALTER PROCEDURE [dbo].[getArticleBelongsToCatsCount]
            @id int
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @result int;
            SET @result = (SELECT COUNT(*) FROM art_in_cat WHERE child_id = @id);
            RETURN @result;
        END

    I use a SqlCommand object to call this stored procedure:

        public int ExecuteNonQuery()
        {
            try
            {
                return _command.ExecuteNonQuery();
            }
            catch (Exception e)
            {
                Logger.instance.ErrorRoutine(e, "Text: " + _command.CommandText);
                return -1;
            }
        }

    Until recently, everything worked fine. All of a sudden, the stored procedure returned -1. At first I suspected that the ExecuteNonQuery call had thrown an exception, but when stepping through the function it shows that no exception is thrown and the return value comes directly from return _command.ExecuteNonQuery();. I checked the following and they were as expected: the connection object was set to the correct database with correct access values, and the parameter for the SP was there and had the right type, direction and value. Then I checked the SP via SQL Manager, using the same parameter value as the one for which my C# code returns -1 (I also checked several more parameter values in my C# program and they ALL returned -1), but in the manager the SP returns the correct value. It looks like the call from my C# program is somehow bugged, but as I don't get any error (just the -1 from the SP), I have no idea where to look for a solution.
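
    For comparison, here is a minimal sketch (not the poster's wrapper) of how a stored procedure's RETURN value is usually read in ADO.NET: through a parameter with ParameterDirection.ReturnValue rather than the result of ExecuteNonQuery, which only reports affected rows and yields -1 when SET NOCOUNT ON is in effect. The connection string and method name below are placeholders.

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class ReturnValueSketch
        {
            // Returns the stored procedure's RETURN value, not the rows-affected count.
            static int GetArticleCategoryCount(string connectionString, int id)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("dbo.getArticleBelongsToCatsCount", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@id", id);

                    // The RETURN value arrives in a dedicated parameter.
                    SqlParameter ret = cmd.Parameters.Add("@RETURN_VALUE", SqlDbType.Int);
                    ret.Direction = ParameterDirection.ReturnValue;

                    conn.Open();
                    cmd.ExecuteNonQuery();   // rows affected; -1 is expected here with SET NOCOUNT ON
                    return (int)ret.Value;   // the value set by RETURN @result
                }
            }
        }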

    Read the article

  • C# Interface Method calls from a controller

    - by ArjaaAine
    I was just working on some application architecture and this may sound like a stupid question, but please explain to me how the following works.

    Interface:

        public interface IMatterDAL
        {
            IEnumerable<Matter> GetMattersByCode(string input);
            IEnumerable<Matter> GetMattersBySearch(string input);
        }

    Class:

        public class MatterDAL : IMatterDAL
        {
            private readonly Database _db;

            public MatterDAL(Database db)
            {
                _db = db;
                LoadAll(); // private method
            }

            public virtual IEnumerable<Matter> GetMattersBySearch(string input)
            {
                //CODE
                return result;
            }

            public virtual IEnumerable<Matter> GetMattersByCode(string input)
            {
                //CODE
                return results;
            }
        }

    Controller:

        public class MatterController : ApiController
        {
            private readonly IMatterDAL _publishedData;

            public MatterController(IMatterDAL publishedData)
            {
                _publishedData = publishedData;
            }

            [ValidateInput(false)]
            public JsonResult SearchByCode(string id)
            {
                var searchText = id; // better name for this
                var results = _publishedData.GetMattersBySearch(searchText).Select(
                    matter => new
                    {
                        MatterCode = matter.Code,
                        MatterName = matter.Name,
                        matter.ClientCode,
                        matter.ClientName
                    });
                return Json(results);
            }
        }

    This works: when I call my controller method from jQuery and step into it, the call to the _publishedData method goes into the MatterDAL class. I want to know how my controller knows to go to the MatterDAL implementation of the interface IMatterDAL. What if I have another class called MatterDAL2 which is based on the same interface? How will my controller know to call the right method? I am sorry if this is a stupid question; this is baffling me.
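
    A minimal sketch (not taken from the question) of why the call lands in MatterDAL: the controller only sees IMatterDAL, so whichever concrete class is handed to its constructor at composition time is the one that runs. In a real project this wiring is usually done by a dependency-injection container registered at application start-up; the helper below just shows the idea by hand, and the Database argument is reused from the question.

        // Composition root sketch: the one place that decides which implementation is used.
        public static class CompositionRootSketch
        {
            public static MatterController BuildController(Database db)
            {
                // Swap this line for "new MatterDAL2(db)" and the controller's behaviour
                // changes without the controller itself being touched.
                IMatterDAL dal = new MatterDAL(db);
                return new MatterController(dal);
            }
        }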

    Read the article

  • Using block around a static/singleton resource reference

    - by byte
    This is interesting (to me anyway), and I'd like to see if anyone has a good answer and explanation for this behavior. Say you have a singleton database object (or a static database object), and you have it stored in a class Foo:

        public class Foo
        {
            public static SqlConnection DBConn = new SqlConnection(
                ConfigurationManager.ConnectionStrings["BAR"].ConnectionString);
        }

    Then, let's say that you are cognizant of the usefulness of opening and disposing your connection (pretend for this example that it's a one-time use, for purposes of illustration). So you decide to use a using block to take care of the Dispose() call:

        using (SqlConnection conn = Foo.DBConn)
        {
            conn.Open();
            using (SqlCommand cmd = new SqlCommand())
            {
                cmd.Connection = conn;
                cmd.CommandType = System.Data.CommandType.StoredProcedure;
                cmd.CommandText = "SP_YOUR_PROC";
                cmd.ExecuteNonQuery();
            }
            conn.Close();
        }

    This fails with an error stating that the "ConnectionString property is not initialized". It's not an issue with pulling the connection string from app.config/web.config. When you investigate in a debug session you see that Foo.DBConn is not null, but contains empty properties. Why is this?
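
    One common way to sidestep the problem (a sketch, assuming the connection string still lives in config under the name "BAR") is to hand out a new SqlConnection per use instead of sharing a static instance, so that disposing one caller's connection cannot clear state that another caller relies on:

        using System.Configuration;      // ConfigurationManager
        using System.Data.SqlClient;     // SqlConnection, SqlCommand

        public static class Db
        {
            private static readonly string ConnString =
                ConfigurationManager.ConnectionStrings["BAR"].ConnectionString;

            // Each caller gets its own connection, so a using block can dispose it freely.
            public static SqlConnection CreateConnection()
            {
                return new SqlConnection(ConnString);
            }
        }

        // Usage (inside some method):
        using (SqlConnection conn = Db.CreateConnection())
        using (SqlCommand cmd = new SqlCommand("SP_YOUR_PROC", conn))
        {
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            conn.Open();
            cmd.ExecuteNonQuery();
        }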

    Read the article

  • Update PageTitle on Timer.Tick

    - by sohum
    I've got a page with a Timer that is being used as a trigger for an UpdatePanel. The page also contains a TabContainer and several TabPanels (see this question for more information). Basically, I've got an UpdatePanel as the element in each TabPanel's ContentTemplate, and the UpdatePanel is triggered by the Timer. My page displays data by reading a database on each tick. I've got the following code running on each Timer.Tick in my codebehind:

        protected void timeRefresher_Tick(object sender, EventArgs e)
        {
            UpdateLivePageTitle();
        }

    The UpdateLivePageTitle() function reads the new information from the database and sets Page.Title accordingly. However, this information is of course not sent to the browser, because there is no full page postback, only an async postback to the update panels. As a result, my page title is not updated until the whole page is posted back, which defeats the purpose of using UpdatePanels in the first place. I figure there would be a way to do this by setting document.title in JavaScript and calling that from within UpdateLivePageTitle(). But as of now, I haven't been able to figure out how to do this. I tried using the following in my UpdateLivePageTitle() function:

        string updatePageTitleScript = String.Format("document.title = '{0}'", newPageTitle);
        ToolkitScriptManager.RegisterStartupScript(this.Page, this.Page.GetType(),
            "UpdatePageTitle", updatePageTitleScript, true);

    But the result of this was that my TabContainer stopped rendering. I'm also not sure that would work with the async partial-page postbacks, either. Any ideas? Thanks!
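
    Not a confirmed fix, but a sketch of the partial-postback-aware variant: registering the script through ScriptManager.RegisterStartupScript against a control that lives inside one of the UpdatePanels makes the script run on async postbacks as well. The upStatus panel and ReadTitleFromDatabase() below are placeholder names, not from the question.

        private void UpdateLivePageTitle()
        {
            string newPageTitle = ReadTitleFromDatabase();   // placeholder for the existing DB read
            Page.Title = newPageTitle;                       // still applies on full postbacks

            // Registering against a control inside an UpdatePanel (upStatus is a placeholder)
            // makes the script part of the async postback response.
            string script = string.Format(
                "document.title = '{0}';", newPageTitle.Replace("'", "\\'"));
            ScriptManager.RegisterStartupScript(
                upStatus, typeof(UpdatePanel), "UpdatePageTitle", script, true);
        }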

    Read the article

  • Advice: Python Framework Server/Worker Queue management (not Website)

    - by Muppet Geoff
    I am looking for some advice/opinions on which Python framework to use for an implementation of multiple 'Worker' PCs coordinated from a central queue manager. For completeness, the 'Worker' PCs will be running audio conversion routines (which I do not need advice on, and have standalone code that works). The audio conversion takes a long time, and I need to coordinate an arbitrary number of the workers from a central location, handing them conversion tasks (such as where to get the source files, or where to ask for the job configuration) with them reporting back some additional info, such as the runtime of the converted audio etc. At present, I have a script that makes a webservice call to get the 'configuration' for a conversion task, based on source files already located on the worker (we manually copy the source files to the worker, and that triggers a conversion routine). I want to change this so that we can distribute conversion tasks ("Oy you, process this: xxx") based on availability and, in an ideal world, based on pending tasks too. There is a chance that workers can go offline mid-conversion (but this is not likely). All the workers are Windows based; the coordinator can be Windows or Linux. In my initial searches I have come across the following, and I know that some are cross-dependent:

    - Celery (with RabbitMQ)
    - Twisted
    - Django

    Using a framework, rather than home-brewing, seems to make more sense to me right now. I have a limited timeframe in which to develop this functional extension. An additional consideration would be using a framework that is compatible with PyQt/PySide so that I can write a simple UI to display queue status etc. I appreciate that the specifics above are a little vague, and I hope that someone can offer me a pointer or two. Again: I am looking for general advice on which Python framework to investigate further, for developing a Server/Worker 'queue management' solution for non-web activities (this is why Django didn't seem the right fit).

    Read the article

  • @dynamic property needs setter with multiple behaviors

    - by ambertch
    I have a class that contains multiple user objects and as such has an array of them as an instance variable:

        NSMutableArray *users;

    The tricky part is setting it. I am deserializing these objects from a server via ObjectiveResource, and for backend reasons users can only be returned as a long string of UIDs; what I have locally is a separate dictionary of users keyed by UID. Given the string uidString of comma-separated UIDs, I override the default setter and populate the actual user objects:

        @dynamic users;

        - (void)setUsers:(id)uidString {
            users = [NSMutableArray arrayWithArray:
                [[User allUsersDictionary] objectsForKeys:
                    [(NSString*)uidString componentsSeparatedByString:@","]]];
        }

    The problem is this: I now serialize these to the database using SQLitePO, which stores the array of user objects, not the original string. So when I retrieve it from the database, the setter mistakenly treats this array of user objects as a string! What I actually want is to adjust the setter's behavior depending on whether it gets the object from the DB or over the network. I can't just make the getter serialize back into a string without tearing up a lot of code that references this array of user objects, and I tried to detect in the setter whether I have a string or an array coming in:

        if ([uidString respondsToSelector:@selector(addObject)]) {
            // Already an array, so don't do anything - just assign
            users = uidString
        }

    but no success... so I'm kind of stuck. Any suggestions? Thanks in advance!

    Read the article

  • GridView ObjectDataSource LINQ Paging and Sorting using multiple table query.

    - by user367426
    I am trying to create a paging and sorting object data source that, before execution, returns all results, then sorts on these results before filtering, and then uses the Take and Skip methods with the aim of retrieving just a subset of results from the database (saving on database traffic). This is based on the following article: http://www.singingeels.com/Blogs/Nullable/2008/03/26/Dynamic_LINQ_OrderBy_using_String_Names.aspx. Now I have managed to get this working, even creating lambda expressions to reflect the sort expression returned from the grid, and even finding out the data type to sort for DateTime and Decimal:

        public static string GetReturnType<TInput>(string value)
        {
            var param = Expression.Parameter(typeof(TInput), "o");
            Expression a = Expression.Property(param, "DisplayPriceType");
            Expression b = Expression.Property(a, "Name");
            Expression converted = Expression.Convert(
                Expression.Property(param, value), typeof(object));
            Expression<Func<TInput, object>> mySortExpression =
                Expression.Lambda<Func<TInput, object>>(converted, param);
            UnaryExpression member = (UnaryExpression)mySortExpression.Body;
            return member.Operand.Type.FullName;
        }

    Now the problem I have is that many of the queries return joined tables and I would like to sort on fields from the other tables. So when executing a query you can create a function that will assign the properties from other tables to properties created in the partial class:

        public static Account InitAccount(Account account)
        {
            account.CurrencyName = account.Currency.Name;
            account.PriceTypeName = account.DisplayPriceType.Name;
            return account;
        }

    So my question is: is there a way to assign the value from the joined table to the property of the current table's partial class? I have tried using

        from a in dc.Accounts
        where a.CompanyID == companyID && a.Archived == null
        select new { PriceTypeName = a.DisplayPriceType.Name })

    but this seems to mess up my sort expression. Any help on this would be much appreciated; I do understand that this is complex stuff.
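
    Below is a sketch (an assumption, not code from the linked article) of one way to make the dynamic OrderBy walk a dotted property path such as "DisplayPriceType.Name", so the sort can target a joined table's column without first copying it onto the partial class:

        using System.Linq;
        using System.Linq.Expressions;

        public static class DynamicOrderBySketch
        {
            public static IQueryable<T> OrderByPath<T>(this IQueryable<T> source, string path)
            {
                ParameterExpression param = Expression.Parameter(typeof(T), "o");

                // Walk "DisplayPriceType.Name" one segment at a time.
                Expression body = param;
                foreach (string segment in path.Split('.'))
                    body = Expression.Property(body, segment);

                LambdaExpression keySelector = Expression.Lambda(body, param);

                // Build Queryable.OrderBy<T, TKey>(source, keySelector) dynamically,
                // because TKey (here, the joined column's type) is only known at runtime.
                MethodCallExpression call = Expression.Call(
                    typeof(Queryable), "OrderBy",
                    new[] { typeof(T), body.Type },
                    source.Expression, Expression.Quote(keySelector));

                return source.Provider.CreateQuery<T>(call);
            }
        }

    It would be used as dc.Accounts.OrderByPath("DisplayPriceType.Name") before the Skip/Take calls; an OrderByDescending variant follows the same shape.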

    Read the article

  • Performing centralized authorization for multiple applications

    - by Vaibhav
    Here's a question that I have been wrestling with for a while. We have a number of applications that we have created, and they have grown organically over a period of time. All of these applications have permissions code built into them that controls access to various parts of the application, depending on whether the currently logged-in user has the necessary permissions or not. Alongside these applications is a utility application which allows an administrator to map users to permissions for all applications. The way it works is that every application has code which reads the utility application's external database to check whether the currently logged-in user has the necessary permission. Now, the question is this: should the user-permissions mapping information reside in, and be owned by, the applications themselves, or is it okay to have this information reside in an external entity/DB (as in this case, the utility application's database)? Part of me thinks that application permissions are very specific to the application context itself, so they shouldn't be separated from the application. But I am not sure. Any comments?

    Read the article

  • Qt/C++ Error handling

    - by ShiGon
    I've been doing a lot of research about handling errors with Qt/C++ and I'm still as lost as when I started. Maybe I'm looking for an easy way out (like other languages provide). One language in particular provides an unhandled-exception hook which I use religiously: when the program encounters a problem, it raises the unhandled exception so that I can create my own error report. That report gets sent from my customer's machine to a server online, which I then read later. The problem that I'm having with C++ is that any error handling has to be thought of beforehand (think try/catch or massive conditionals). In my experience, problems in code are not thought of beforehand, else there wouldn't be a problem to begin with. Writing a cross-platform application without a cross-platform error handling/reporting/trace mechanism is a little scary to me. My question is: is there any kind of Qt- or C++-specific "catch-all" error-trapping mechanism that I can use in my application so that, if something does go wrong, I can at least write a report before it crashes?

    Read the article

  • NHibernate / multiple sessions and nested objects

    - by bernhardrusch
    We are using NHibernate in a rich client application. It is a pretty open application: the user searches for a dataset or creates a new one, changes the data and saves the dataset. We leave the session open because sometimes we have to lazy-load some properties of the object (nested object structure). This causes one big problem: if we leave the session open, the DB (MySQL) closes the connection, we are not able to find this out, and an exception (socket communication error) is thrown when we access the database. (We are thinking about testing the DB connection before accessing the object, but this is not really optimal either; the other option would be to raise the timeout of the DB connection, but that doesn't seem too wise either.) So: is it possible to reconnect the session to a new database connection? Another question: is it possible to get an object from one session and then re-attach it to another session? (I often hear that session.Lock should work for this, but it doesn't work so well in our application, so I ended up getting a "fresh" object from the new session and copying the data over manually, which is a little bit cumbersome.) Any ideas on this?
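
    Not a definitive answer, but a minimal sketch of the usual detached-object pattern in NHibernate (API names as in NHibernate 2.1+; the factory and Customer entity are placeholders): close the dead session, open a fresh one (which takes a new connection from the pool), and re-associate the detached instance with Merge, or Lock when the object is unchanged:

        // Assumes an existing ISessionFactory; Customer is a placeholder entity type.
        public Customer ReattachAfterConnectionLoss(ISessionFactory factory, Customer detached)
        {
            // The old session is unusable once MySQL has dropped its connection.
            using (ISession session = factory.OpenSession())        // fresh connection from the pool
            using (ITransaction tx = session.BeginTransaction())
            {
                // Merge copies the detached state onto a persistent instance and returns it;
                // for an unchanged object, session.Lock(detached, LockMode.None) also re-attaches.
                Customer attached = (Customer)session.Merge(detached);
                tx.Commit();
                return attached;
            }
        }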

    Read the article

  • ExecuteNonQuery: Connection property has not been initialized

    - by 20151012
    I get an error in my code; the error points at dbcom.ExecuteNonQuery();.

    Code (connection):

        public admin_addemp()
        {
            InitializeComponent();
            con = new OleDbConnection(@"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=|DataDirectory|\EMPS.accdb;Persist Security Info=True");
            con.Open();
            ds.Tables.Add(dt);
            ds.Tables.Add(dt2);
        }

    Code (Save button):

        private void save_btn_Click(object sender, EventArgs e)
        {
            OleDbCommand check = new OleDbCommand();
            check.Connection = con;
            check.CommandText = "SELECT COUNT(*) FROM employeeDB WHERE ([EmployeeName]=?)";
            check.Parameters.AddWithValue("@EmployeeName", tb_name.Text);

            if (Convert.ToInt32(check.ExecuteScalar()) == 0)
            {
                dbcom2 = new OleDbCommand("SELECT EmployeeID FROM employeeDB WHERE EmployeeID='" + tb_id.Text + "'", con);
                OleDbParameter param = new OleDbParameter();
                param.ParameterName = tb_id.Text;
                dbcom.Parameters.Add(param);

                OleDbDataReader read = dbcom2.ExecuteReader();
                if (read.HasRows)
                {
                    MessageBox.Show("The Employee ID '" + tb_id.Text + "' already exist. Please choose another Employee ID.");
                    read.Dispose();
                    read.Close();
                }
                else
                {
                    string q = "INSERT INTO employeeDB (EmployeeID, EmployeeName, IC, Address, State, " +
                        " Postcode, DateHired, Phone, ManagerName) VALUES ('" + tb_id.Text + "', " +
                        " '" + tb_name.Text + "', '" + tb_ic.Text + "', '" + tb_add1.Text + "', '" + cb_state.Text + "', " +
                        " '" + tb_postcode.Text + "', '" + dateTimePicker1.Value + "', '" + tb_hp.Text + "', '" + cb_manager.Text + "')";
                    dbcom = new OleDbCommand(q, con);
                    dbcom.ExecuteNonQuery();
                    MessageBox.Show("New Employee '" + tb_name.Text + "'- Successfuly Added.");
                }
            }
            else
            {
                MessageBox.Show("Employee name '" + tb_name.Text + "' already added into the database");
            }

            ds.Dispose();
        }

    I'm using Microsoft Access 2010 as my database and this is a stand-alone system. Please help me.
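
    For contrast, a hedged sketch (not the poster's code; control names are reused from the question) of the same insert written with a command that is explicitly given the open connection and uses ? placeholders instead of string concatenation:

        string q = "INSERT INTO employeeDB (EmployeeID, EmployeeName, IC, Address, State, " +
                   "Postcode, DateHired, Phone, ManagerName) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)";

        using (OleDbCommand insert = new OleDbCommand(q, con))   // connection supplied here
        {
            // OLE DB parameters are positional: the order below must match the ? placeholders.
            insert.Parameters.AddWithValue("@EmployeeID", tb_id.Text);
            insert.Parameters.AddWithValue("@EmployeeName", tb_name.Text);
            insert.Parameters.AddWithValue("@IC", tb_ic.Text);
            insert.Parameters.AddWithValue("@Address", tb_add1.Text);
            insert.Parameters.AddWithValue("@State", cb_state.Text);
            insert.Parameters.AddWithValue("@Postcode", tb_postcode.Text);
            insert.Parameters.AddWithValue("@DateHired", dateTimePicker1.Value);
            insert.Parameters.AddWithValue("@Phone", tb_hp.Text);
            insert.Parameters.AddWithValue("@ManagerName", cb_manager.Text);

            insert.ExecuteNonQuery();
        }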

    Read the article

  • Determining the chances of an event occurring when it hasn't occurred yet

    - by sanity
    A user visits my website at time t, and they may or may not click on a particular link I care about. If they do, I record the fact that they clicked the link, and also the duration since t at which they clicked it; call this d. I need an algorithm that allows me to create a class like this:

        class ClickProbabilityEstimate {
            public void reportImpression(long id);
            public void reportClick(long id);
            public double estimateClickProbability(long id);
        }

    Every impression gets a unique id, and this is used when reporting a click to indicate which impression the click belongs to. I need an algorithm that will return the probability, based on how much time has passed since an impression was reported, that the impression will receive a click, based on how long previous clicks took. Clearly one would expect this probability to decrease over time if there is still no click. If necessary, we can set an upper bound beyond which we consider the click probability to be 0 (e.g. if it's been an hour since the impression occurred, we can be pretty sure there won't be a click). The algorithm should be both space and time efficient, and hopefully make as few assumptions as possible, while being elegant. Ease of implementation would also be nice. Any ideas?
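
    One simple empirical approach, offered only as a sketch: keep the delays of past clicks plus a count of impressions that never converted within the cutoff window, and estimate P(click | no click after d) as the fraction of historically still-unclicked-at-d impressions that eventually clicked. Everything below, including the one-hour cutoff and the decision to work with delays directly rather than impression ids, is an assumption for illustration.

        using System;
        using System.Collections.Generic;

        // Sketch only: assumes the recorded history is "resolved", i.e. older than the cutoff.
        class ClickProbabilityEstimate
        {
            static readonly TimeSpan Cutoff = TimeSpan.FromHours(1);       // assumed upper bound
            readonly List<double> clickDelaysSeconds = new List<double>(); // kept sorted
            long unclickedImpressions;  // impressions that never got a click within the cutoff

            public void ReportUnclickedImpression()
            {
                unclickedImpressions++;
            }

            public void ReportClick(TimeSpan delay)
            {
                int i = clickDelaysSeconds.BinarySearch(delay.TotalSeconds);
                clickDelaysSeconds.Insert(i < 0 ? ~i : i, delay.TotalSeconds);
            }

            // P(click eventually | no click yet after 'elapsed')
            public double EstimateClickProbability(TimeSpan elapsed)
            {
                if (elapsed >= Cutoff) return 0.0;

                // Historical clicks that arrived later than 'elapsed'.
                int i = clickDelaysSeconds.BinarySearch(elapsed.TotalSeconds);
                int laterClicks = clickDelaysSeconds.Count - (i < 0 ? ~i : i + 1);

                long stillWaiting = laterClicks + unclickedImpressions;
                return stillWaiting == 0 ? 0.0 : (double)laterClicks / stillWaiting;
            }
        }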

    Read the article

  • Error monitoring/handling on webservers

    - by Industrial
    Hi everybody, we have a web server that we're about to launch a number of applications onto. They will all share database and memcached servers, but each application has its own MySQL database and all memcached keys are prefixed per application. Possible scenario: if a memcached server in our cluster goes boom, we want someone (the operating system admin) to be automatically contacted by email, iPhone push notification or any other appropriate way. If we were to install 150 identical applications for our customers on our servers and a memcached server dies, all 150 applications will individually find this out and contact our system admin, who is most certainly going to think about getting a new job where he or she isn't woken up by 150 messages sent at 4:15 in the morning. Possible solution: one idea is to set up an external server for error handling that receives a $_POST or cURL request and handles storage of the error message depending on the seriousness of the actual error. Upon receiving the error call it would of course check whether the same memcached server has already been reported as offline, so there would be no need to spam the system admin with additional reminders... The questions: What's a good approach to handling errors? How do the big guys in the industry handle this? Thanks!

    Read the article

  • How To Correctly Specify A Default Value For A String Field In A PHP/MySQL Prepared Statement

    - by Joshua
    I'm trying to debug some auto-generated code, but I am a MySQL noob. Everything goes fine until the "prepare" line below, and then for some reason $mysqli_stmt is false, yielding the stated error. Could it have something to do with SQL_MODE = 'ANSI'? The failure seems to have something to do with the string 'xxx' below, but it still happens no matter what I change it to. This value is meant to be a default value for the TickerDigest field; strangely, if I change 'xxx' to 'c_u_TickerDigest', then it suddenly works, but the TickerDigest field is inserted as 'null' when I look in the database.

        $mysqli = mysqli_init();
        $mysqli->options(MYSQLI_INIT_COMMAND, "SET SQL_MODE = 'ANSI'");
        $mysqli->real_connect(SR_Host, SR_Username, SR_Password, SR_Database) or die('Unable to connect to Database');

        $sql_stmt = 'INSERT INTO "t_sr_u_Product"("c_u_Name", "c_u_Code", "c_u_TickerDigest") VALUES (?, ?, "xxx")';

        $mysqli_stmt = $mysqli->prepare($sql_stmt);

    The error:

        Fatal error: Uncaught exception 'Exception' with message 'INSERT INTO "t_sr_u_Product"("c_u_Name", "c_u_Code", "c_u_TickerDigest") VALUES (?, ?, "xxx"): prepare statement failed: Unknown column 'xxx' in 'field list'' in P:\StarRise\SandBox\GateKeeper\Rise\srIProduct.php on line 18

    I'm hopeful that what's going wrong is fairly simple, since I'm almost completely ignorant about SQL.

    Read the article

  • How can I keep curl output out of mail from my cronjob?

    - by Russell C.
    I have written a Perl script that runs as a daily crontab job and uploads files to Amazon S3 via curl. I want the output of the cron job emailed to me, which works fine, but I don't want that email to include messages related to the curl upload (only the messages my script outputs). Here are the curl-related messages I'm seeing in the daily email right now:

          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
          0  230M    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
          0  230M    0     0    0   544k      0  1519k  0:02:35 --:--:--  0:02:35 1807k
          0  230M    0     0    0  1744k      0  1286k  0:03:03  0:00:01  0:03:02 1342k
          1  230M    0     0    1  2880k      0  1219k  0:03:13  0:00:02  0:03:11 1250k
          1  230M    0     0    1  4016k      0  1198k  0:03:17  0:00:03  0:03:14 1218k
          2  230M    0     0    2  5168k      0  1186k  0:03:19  0:00:04  0:03:15 1202k
          2  230M    0     0    2  6336k      0  1181k  0:03:19  0:00:05  0:03:14 1157k
          3  230M    0     0    3  7488k      0  1177k  0:03:20  0:00:06  0:03:14 1147k
          3  230M    0     0    3  8592k      0  1167k  0:03:22  0:00:07  0:03:15 1142k
          4  230M    0     0    4  9744k      0  1166k  0:03:22  0:00:08  0:03:14 1145k
          4  230M    0     0    4 10.6M      0  1163k  0:03:23  0:00:09  0:03:14 1142k
          5  230M    0     0    5 11.7M      0  1161k  0:03:23  0:00:10  0:03:13 1140k
          5  230M    0     0    5 12.8M      0  1158k  0:03:23  0:00:11  0:03:12 1133k
          6  230M    0     0    6 13.9M      0  1155k  0:03:24  0:00:12  0:03:12 1138k
          6  230M    0     0    6 15.0M      0  1155k  0:03:24  0:00:13  0:03:11 1138k
          7  230M    0     0    7 16.1M      0  1152k  0:03:25  0:00:14  0:03:11 1131k
          7  230M    0     0    7 17.2M      0  1152k  0:03:25  0:00:15  0:03:10 1132k
          7  230M    0     0    7 18.4M      0  1152k  0:03:24  0:00:16  0:03:08 1140k

    I am using a simple Perl system() call to invoke curl. Does anyone know what command-line argument I can supply to curl to turn off the reporting of upload progress?

    Read the article

  • Why does this program require MSVCR80.dll and what's the best solution for this kinda problem?

    - by Runner
        #include <gtk/gtk.h>

        int main( int argc, char *argv[] )
        {
            GtkWidget *window;

            gtk_init (&argc, &argv);

            window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
            gtk_widget_show (window);

            gtk_main ();

            return 0;
        }

    I tried putting various versions of MSVCR80.dll into the same directory as the generated executable (built via CMake), but none matched. Is there a general solution for this kind of problem?

    UPDATE: Some answers recommend installing the VS redistributable, but I'm not sure whether or not it will affect my installed Visual Studio 9; can someone confirm?

    Manifest file of the executable:

        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
            <security>
              <requestedPrivileges>
                <requestedExecutionLevel level="asInvoker" uiAccess="false"></requestedExecutionLevel>
              </requestedPrivileges>
            </security>
          </trustInfo>
          <dependency>
            <dependentAssembly>
              <assemblyIdentity type="win32" name="Microsoft.VC90.DebugCRT" version="9.0.21022.8" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity>
            </dependentAssembly>
          </dependency>
        </assembly>

    The manifest says the executable should use the VC90 runtime (MSVCR90), so why does it keep reporting a missing MSVCR80.dll?

    FOUND: After spending several hours on it, I finally found that it was caused by this entry in PATH:

        D:\MATLAB\R2007b\bin\win32

    After removing it, everything works fine. But why can that setting change my running executable from using msvcr90 to msvcr80???

    Read the article

  • Tab Content Does Not Refresh After First Click of Like Button

    - by Adam
    I've implemented a very simple "like guard" for a Facebook tab, and am running into an issue with my test users. Multiple testers are reporting that when they open a tab and click the "like" button, they do not always get a page refresh (so the like guard does not disappear until they do a manual reload). This is using Facebook's like button at the top of the page, not one I've coded up myself. As a sanity check, I enabled some simple logging on my server and have been able to recreate the issue: I hit "like" or "unlike" but there seems to be no request made to my index.php page, so definitely no refresh happening. I'm aware of this old bug https://developers.facebook.com/bugs/228778937218386 but this one seems different. For starters, after the first click of the "like" button, if I just continue clicking unlike/like/... then the refresh happens automatically, as expected. What's especially weird is that if I reload the page after the first failed refresh, the refreshes start working again as expected, i.e. the first update to my like status triggers a page refresh. Some possibly relevant info:

    - My tab is part of a test page, and is unpublished.
    - I am only using HTTP hosting for the tab content, since my HTTPS isn't set up yet.
    - So far I've only tested with other admins, so maybe user role affects this?

    I'm curious to see if anyone has run into this issue before.

    Read the article

  • TextBox change is not saved to DataTable

    - by SeaDrive
    I'm having trouble with a simple table edit in a WinForms application. I must have missed a step. I have a DataSet containing a DataTable connected to a database with a SqlDataAdapter. There is a SqlCommandBuilder on the SqlDataAdapter. On the form there are TextBoxes which are bound to the DataTable. The binding was done in the designer and it produced statements like this:

        this.tbLast.DataBindings.Add(new System.Windows.Forms.Binding("Text", this.belkData, "belk_mem.last", true));

    When I fill the row in the DataTable, the values from the database appear in the TextBoxes, but when I change the contents of a TextBox, the changes are apparently not going to the DataTable. When I try to save, both of the following return null:

        DataTable dtChanges = dtMem.GetChanges();
        DataSet dsChanges = belkData.GetChanges();

    What did I forget?

    Edit - response to mrlucmorin: the save is under a button. The code is:

        BindingContext[belkData, "belk_mem"].EndCurrentEdit();

        try
        {
            DataSet dsChanges = belkData.GetChanges();
            if (dsChanges != null)
            {
                int nRows = sdaMem.Update(dsChanges);
                MessageBox.Show("Row(s) Updated: " + nRows.ToString());
                belkData.AcceptChanges();
            }
            else
            {
                MessageBox.Show("Nothing to save.", "No changes");
            }
        }
        catch (Exception ex)
        {
            MessageBox.Show("Error: " + ex.Message);
        }

    I've tried putting in these statements without any change in behavior:

        dtMem.AcceptChanges();
        belkData.AcceptChanges();
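
    Not a confirmed fix, but a sketch of a binding variant that pushes TextBox edits back into the DataTable as the user types, using the Binding constructor overload that takes a DataSourceUpdateMode (names reused from the question). Calling this.Validate() before the EndCurrentEdit() line in the save handler is another commonly suggested step, since WinForms commits bound edits as part of validation.

        // Push the Text value into belk_mem.last on every change to the property,
        // instead of waiting for focus to leave the control and validation to run.
        this.tbLast.DataBindings.Add(new System.Windows.Forms.Binding(
            "Text", this.belkData, "belk_mem.last", true,
            System.Windows.Forms.DataSourceUpdateMode.OnPropertyChanged));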

    Read the article
